Thread: performance question (something to do w/ parameterized stmts?, wrong index types?)

Why does this query take so long?  (PostgreSQL 8.0.3, FC4)
Hopefully I have provided enough information below.

LOG:  statement: SELECT * FROM x WHERE f IN
($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,\
$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35,$36,$37,$38,$39,$40,$41,$42,$43,$44,$45,$46,$47,$48,$49,$50,$51,$52,$53,$54,$55,$56,$57,$58,$59,$60,$61,$62,$63\
,$64,$65,$66,$67,$68,$69,$70,$71,$72,$73,$74,$75,$76,$77,$78,$79,$80,$81,$82,$83,$84,$85,$86,$87,$88,$89,$90,$91,$92,$93,$94,$95,$96,$97,$98,$99,$100,$101,\
$102,$103,$104,$105,$106,$107,$108,$109,$110,$111,$112,$113,$114,$115,$116,$117,$118,$119,$120,$121,$122,$123,$124,$125,$126,$127,$128,$129,$130,$131,$132,\
$133,$134,$135,$136,$137,$138,$139,$140,$141,$142,$143,$144,$145,$146,$147,$148,$149,$150,$151,$152,$153,$154,$155,$156,$157,$158,$159,$160,$161,$162,$163,\
$164,$165,$166,$167,$168,$169,$170,$171,$172,$173,$174,$175,$176,$177,$178,$179,$180,$181,$182,$183,$184,$185,$186,$187,$188,$189,$190,$191,$192,$193,$194,\
$195,$196,$197,$198,$199,$200,$201,$202,$203,$204,$205,$206,$207,$208,$209,$210,$211,$212,$213,$214,$215,$216,$217,$218,$219,$220,$221,$222,$223,$224,$225,\
$226,$227,$228,$229,$230,$231,$232,$233,$234,$235,$236,$237,$238,$239,$240,$241,$242,$243,$244,$245,$246,$247,$248,$249,$250,$251,$252,$253,$254,$255,$256,\
$257,$258,$259,$260,$261,$262,$263,$264,$265,$266,$267,$268,$269,$270,$271,$272,$273,$274,$275,$276,$277,$278,$279,$280,$281,$282,$283,$284,$285,$286,$287,\
$288,$289,$290,$291,$292,$293,$294,$295,$296,$297,$298,$299,$300,$301,$302,$303,$304,$305,$306,$307,$308,$309,$310,$311,$312,$313,$314,$315,$316,$317,$318,\
$319,$320,$321,$322,$323,$324,$325,$326,$327,$328,$329,$330,$331,$332,$333,$334,$335,$336,$337,$338,$339,$340,$341,$342,$343,$344,$345,$346,$347,$348,$349,\
$350,$351,$352,$353,$354,$355,$356,$357,$358,$359,$360,$361,$362,$363,$364,$365,$366,$367,$368,$369,$370,$371,$372,$373,$374,$375,$376,$377,$378,$379,$380,\
$381,$382,$383,$384,$385,$386,$387,$388,$389,$390,$391,$392,$393,$394,$395,$396,$397,$398,$399,$400,$401,$402,$403,$404,$405,$406,$407,$408,$409,$410,$411,\
$412,$413,$414,$415,$416,$417,$418,$419,$420,$421,$422,$423,$424,$425,$426,$427,$428,$429,$430,$431,$432,$433,$434,$435,$436,$437,$438,$439,$440,$441,$442,\
$443,$444,$445,$446,$447,$448,$449,$450,$451,$452,$453,$454,$455,$456,$457,$458,$459,$460,$461,$462,$463,$464,$465,$466,$467,$468,$469,$470,$471,$472,$473,\
$474,$475,$476,$477,$478,$479,$480,$481,$482,$483,$484,$485,$486,$487,$488,$489,$490,$491,$492,$493,$494,$495,$496,$497,$498,$499,$500,$501,$502,$503,$504,\
$505,$506,$507,$508,$509,$510,$511,$512,$513,$514,$515,$516,$517,$518,$519,$520,$521,$522,$523,$524,$525,$526,$527,$528,$529,$530,$531,$532,$533,$534,$535,\
$536,$537,$538,$539,$540,$541,$542,$543,$544,$545,$546,$547,$548,$549,$550,$551,$552,$553,$554,$555,$556,$557,$558,$559,$560,$561,$562,$563,$564,$565,$566,\
$567,$568,$569,$570,$571,$572,$573,$574,$575,$576,$577,$578,$579,$580,$581,$582,$583,$584,$585,$586,$587,$588,$589,$590,$591,$592,$593,$594,$595,$596,$597,\
$598,$599,$600,$601,$602,$603,$604,$605,$606,$607,$608,$609,$610,$611,$612,$613,$614,$615,$616,$617,$618,$619,$620,$621,$622,$623,$624,$625,$626,$627,$628,\
$629,$630,$631,$632,$633,$634,$635,$636,$637,$638,$639,$640,$641,$642,$643,$644,$645,$646,$647,$648,$649,$650)
ORDER BY f,c

LOG:  EXECUTOR STATISTICS
DETAIL:  ! system usage stats:
         !       10.282945 elapsed 10.234444 user 0.048992 system sec
         !       [25.309152 user 0.500923 sys total]
         !       0/0 [0/0] filesystem blocks in/out
         !       0/0 [0/10397] page faults/reclaims, 0 [0] swaps
         !       0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
         !       0/15 [291/55] voluntary/involuntary context switches
         ! buffer usage stats:
         !       Shared blocks:          0 read,          0 written,
buffer hit rate = 100.00%
         !       Local  blocks:          0 read,          0 written,
buffer hit rate = 0.00%
         !       Direct blocks:          0 read,          0 written


Here is the table description:

Table "public.x"
  Column |  Type   | Modifiers
--------+---------+-----------
  f      | integer | not null
  c      | integer | not null
  r      | integer | not null
  n      | integer | not null
Indexes:
     "x_c_idx" btree (c)
     "x_f_idx" btree (f)
     "testindex2" btree (f, c)


There are only 2,369 records in the X table.

I don't understand why this query should take 10 seconds in the executor
phase, with so little data being managed, and all relevant data already
in memory.  Any clues?

Maybe there are more database server debugging options I should have
tweaked, but I'm not sure what.  The stuff I turned on included:

log_duration = true
log_statement = 'all'
log_parser_stats = true
log_planner_stats = true
log_executor_stats = true

   (N.B. setting log_statement_stats = true caused server startup to fail
every time, with no error message I could find, so it was left unset)

stats_start_collector = true
stats_command_string = true
stats_block_level = true
stats_row_level = true
stats_reset_on_server_start = true



(FYI: 10 seconds matters because this query is executed many times in
my application, nearly all of those executions are just as slow, and the
aggregate query time is killing my app's response time.)

Thanks for any tips!

What's EXPLAIN ANALYZE show?

On Mon, May 08, 2006 at 01:29:28PM -0400, Jeffrey Tenny wrote:
> Why does this query take so long?  (PostgreSQL 8.0.3, FC4)
> Hopefully I have provided enough information below.
> [...]

--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Mark Lewis
Doing a SELECT with a large list of variables inside an IN runs slowly
on every database we've tested.  We've tested mostly in Oracle and
PostgreSQL, and both get very slow very quickly (actually Oracle refuses
to process the query at all after it gets too many bind parameters).

In our case, we have a (potentially quite large) set of external values
that we want to look up in the database.  We originally thought that
doing a single select with a large IN clause was the way to go, but then
we did some performance analysis on the optimal batch size (number of
items to include per IN clause), and discovered that for most databases,
the optimal batch size was 1.  For PostgreSQL I think it was 2.

The moral of the story is that you're probably better off running a
bunch of small selects than in trying to optimize things with one
gargantuan select.
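
    (For illustration only, a sketch of what batch size 1 looks like against
the table from the original post; the statement name and id values here are
made up:)

PREPARE get_rows_for_f (integer) AS
    SELECT * FROM x WHERE f = $1 ORDER BY c;

EXECUTE get_rows_for_f(42);    -- one EXECUTE per id in the list
EXECUTE get_rows_for_f(43);
DEALLOCATE get_rows_for_f;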

-- Mark Lewis

On Mon, 2006-05-08 at 13:29 -0400, Jeffrey Tenny wrote:
> Why does this query take so long?  (PostgreSQL 8.0.3, FC4)
> Hopefully I have provided enough information below.
> [...]

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Jim C. Nasby
On Mon, May 08, 2006 at 10:42:21AM -0700, Mark Lewis wrote:
> Doing a SELECT with a large list of variables inside an IN runs slowly
> on every database we've tested.  We've tested mostly in Oracle and
> PostgreSQL, and both get very slow very quickly (actually Oracle refuses
> to process the query at all after it gets too many bind parameters).
>
> In our case, we have a (potentially quite large) set of external values
> that we want to look up in the database.  We originally thought that
> doing a single select with a large IN clause was the way to go, but then
> we did some performance analysis on the optimal batch size (number of
> items to include per IN clause), and discovered that for most databases,
> the optimal batch size was 1.  For PostgreSQL I think it was 2.
>
> The moral of the story is that you're probably better off running a
> bunch of small selects than in trying to optimize things with one
> gargantuan select.

Ever experiment with loading the parameters into a temp table and
joining to that?
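
(A rough sketch of that approach against the table from this thread; the temp
table name and id values are made up, and this isn't something benchmarked here:)

BEGIN;
CREATE TEMPORARY TABLE wanted_f (f integer NOT NULL) ON COMMIT DROP;
INSERT INTO wanted_f VALUES (1);   -- one row per id, or load them all with COPY
INSERT INTO wanted_f VALUES (2);
ANALYZE wanted_f;                  -- give the planner a real rowcount
SELECT x.* FROM x JOIN wanted_f USING (f) ORDER BY f, c;
COMMIT;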
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Tom Lane
"Jim C. Nasby" <jnasby@pervasive.com> writes:
> On Mon, May 08, 2006 at 10:42:21AM -0700, Mark Lewis wrote:
>> Doing a SELECT with a large list of variables inside an IN runs slowly
>> on every database we've tested.  We've tested mostly in Oracle and
>> PostgreSQL, and both get very slow very quickly (actually Oracle refuses
>> to process the query at all after it gets too many bind parameters).
>>
>> In our case, we have a (potentially quite large) set of external values
>> that we want to look up in the database.  We originally thought that
>> doing a single select with a large IN clause was the way to go, but then
>> we did some performance analysis on the optimal batch size (number of
>> items to include per IN clause), and discovered that for most databases,
>> the optimal batch size was 1.  For PostgreSQL I think it was 2.
>>
>> The moral of the story is that you're probably better off running a
>> bunch of small selects than in trying to optimize things with one
>> gargantuan select.

> Ever experiment with loading the parameters into a temp table and
> joining to that?

Also, it might be worth re-testing that conclusion with PG CVS tip
(or 8.2 when it comes out).  The reimplementation of IN as = ANY that
I did a couple months ago might well change the results.

            regards, tom lane
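
    (For reference, the = ANY form lets the whole list travel as a single
array value instead of 650 separate parameters; a sketch only, with made-up
id values, not something measured in this thread:)

-- literal array form
SELECT * FROM x WHERE f = ANY (ARRAY[1, 2, 3, 650]) ORDER BY f, c;
-- or a single bind parameter of type integer[]
SELECT * FROM x WHERE f = ANY ($1) ORDER BY f, c;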

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Jeffrey Tenny
Well, since I don't know the exact parameter values, just substituting
1-650 for $1-$650, I get:

  Index Scan using testindex2 on x  (cost=0.00..34964.52 rows=1503
width=16) (actual time=0.201..968.252 rows=677 loops=1)
    Filter: ((f = 1) OR (f = 2) OR (f = 3) OR (f = 4) ...

So index usage is presumably good on this one.

Jim C. Nasby wrote:
> What's EXPLAIN ANALYZE show?
>

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Jeffrey Tenny
Mark Lewis wrote:
> Doing a SELECT with a large list of variables inside an IN runs slowly
> on every database we've tested.  We've tested mostly in Oracle and
> PostgreSQL, and both get very slow very quickly (actually Oracle refuses
> to process the query at all after it gets too many bind parameters).
>
> In our case, we have a (potentially quite large) set of external values
> that we want to look up in the database.  We originally thought that
> doing a single select with a large IN clause was the way to go, but then
> we did some performance analysis on the optimal batch size (number of
> items to include per IN clause), and discovered that for most databases,
> the optimal batch size was 1.  For PostgreSQL I think it was 2.

So that optimal batch size applies to parameterized queries?

In my case, I was concerned about latency between app and database
server, so I try to minimize the number of queries I send to the
database server.  (My app servers can be anywhere, they /should/ be
close to the database server, but there are no guarantees and I can't
control it).

The last time I tested for optimal batch size using non-parameterized
queries with same-host database and app, I got a batch size of
approximately 700 IN list elements (again, not variables in that test).
That was on postgres 7.X.Y.

Guess I'll have to try a test where I turn the parameterized statements
into regular statements.

I'm pretty sure it would be a bad idea for me to send one IN list
element at a time in all cases.  Even if the query prep were fast,
the network latency could kill my app.

>
> The moral of the story is that you're probably better off running a
> bunch of small selects than in trying to optimize things with one
> gargantuan select.

The algorithm currently tries to ensure that IN-lists of not more than
700 elements are sent to the database server, and breaks them into
multiple queries.  If it has to break it into at least 3 queries, it
uses parameterized statements for the first 2+ and then a
non-parameterized statement for the last one (which may have a different
number of IN list elements than the prior batches).

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Tom Lane
Jeffrey Tenny <jeffrey.tenny@comcast.net> writes:
> Well, since I don't know the exact parameter values, just substituting
> 1-650 for $1-$650, I get:

>   Index Scan using testindex2 on x  (cost=0.00..34964.52 rows=1503
> width=16) (actual time=0.201..968.252 rows=677 loops=1)
>     Filter: ((f = 1) OR (f = 2) OR (f = 3) OR (f = 4) ...

> So index usage is presumably good on this one.

No, that's not a very nice plan at all --- the key thing to notice is
it says Filter: not Index Cond:.  What you've actually got here is a
full-index scan over testindex2 (I guess it's doing that to achieve the
requested sort order), then computation of a 650-way boolean OR expression
for each row of the table.  Ugh.

The other way of doing this would involve 650 separate index probes and
then sorting the result.  Which would be pretty expensive too, but just
counting on my fingers it seems like that ought to come out at less than
the 35000 cost units for this plan.  The planner evidently is coming up
with a different answer though.  You might try dropping testindex2
(which I suppose is an index on (f,c)) so that it has only an index on
f to play with, and see what plan it picks and what the estimated/actual
costs are.

            regards, tom lane
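
    (Since DDL is transactional in PostgreSQL, that experiment can be run
without permanently losing the index; a sketch, with the 650-element literal
list abbreviated:)

BEGIN;
DROP INDEX testindex2;    -- note: x stays exclusively locked until the transaction ends
EXPLAIN ANALYZE
    SELECT * FROM x WHERE f IN (1, 2, 3 /* ... */) ORDER BY f, c;
ROLLBACK;                 -- puts testindex2 back exactly as it was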

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Jeffrey Tenny
The original set of indexes were:

Indexes:
     "x_c_idx" btree (c)
     "x_f_idx" btree (f)
     "testindex2" btree (f, c)

I dropped the multicolumn index 'testindex2',  and a new explain analyze
looks like this:

  Sort  (cost=35730.71..35768.28 rows=1503 width=16) (actual
time=962.555..964.467 rows=677 loops=1)
    Sort Key: f, c
    ->  Seq Scan on x  (cost=0.00..34937.60 rows=1503 width=16) (actual
time=5.449..956.594 rows=677 loops=1)
          Filter: ((f = 1) OR (f = 2) OR (f = 3) ...


Turning on the server debugging again, I got roughly identical
query times with and without the two column index.
It appears to have ignored the other indexes completely.


Tom Lane wrote:
> [...]

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Tom Lane
Jeffrey Tenny <jeffrey.tenny@comcast.net> writes:
> I dropped the multicolumn index 'testindex2',  and a new explain analyze
> looks like this:

>   Sort  (cost=35730.71..35768.28 rows=1503 width=16) (actual
> time=962.555..964.467 rows=677 loops=1)
>     Sort Key: f, c
>     ->  Seq Scan on x  (cost=0.00..34937.60 rows=1503 width=16) (actual
> time=5.449..956.594 rows=677 loops=1)
>           Filter: ((f = 1) OR (f = 2) OR (f = 3) ...

> Turning on the server debugging again, I got roughly identical
> query times with and without the two column index.

That's good, actually, seeing that the planner thinks they're close to
the same speed too.  Now try "set enable_seqscan = off" to see if you
can force the multi-index-scan plan to be chosen, and see how that does.

            regards, tom lane

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Jeffrey Tenny
I tried the seqscan disabling and got what sounds like the desired plan:

Sort  (cost=54900.62..54940.29 rows=1587 width=16) (actual time=20.208..22.138 rows=677 loops=1)
    Sort Key: f, c
    ->  Index Scan using x_f_idx, x_f_idx, ...
    (cost=0.00..54056.96 rows=1587 width=16) (actual time=1.048..15.598 rows=677 loops=1)
    Index Cond: ((f = 1) OR (f = 2) OR (f = 3) ....


I then set enable_seqscan = off in postgresql.conf, and it did indeed improve all similar queries on that
table to sub-second response times, down from 6/8/10 second responses.  The elapsed time for
the application action reflected this improvement.

So that raises two questions:

1) is there a way to enable that for a single query in a multi-query transaction?

2) am I opening a can of worms if I turn it off server-wide?  (PROBABLY!)

I've already had to tune the server to account for the fact that
the database is easily cached in memory but the processors are slow. (PIII 550Mhz Xeons)
I've lowered the cost of random pages and raised the cost of per-row processing
as follows (where the configuration defaults are also noted):

# - Planner Cost Constants -

#JDT: default effective_cache_size = 1000       # typically 8KB each
effective_cache_size = 50000    # typically 8KB each
#JDT: default: random_page_cost = 4             # units are one sequential page fetch cost
random_page_cost = 2            # units are one sequential page fetch cost
#JDT: default: cpu_tuple_cost = 0.01            # (same)
cpu_tuple_cost = 0.10           # (same)
#cpu_index_tuple_cost = 0.001   # (same)
#JDT: default: cpu_operator_cost = 0.0025       # (same)
cpu_operator_cost = 0.025       # (same)


Any suggestion for how to apply today's fix (turning seqscan off) without wrecking other queries is welcome,
as is any word on whether I've blundered on the settings above (which may or may not be optimal, but definitely
fixed some former problem queries on that machine).

My transactions are large multi-query serializable transactions, so it's also important that any optimization
targeting a single query not affect the other queries in the same transaction.

Thanks for the help.

Tom Lane wrote:
> [...]

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Jeffrey Tenny
Re my question here: what would be the proper JDBC technique?
My app is all JDBC.

Jeffrey Tenny wrote:
> 1) is there a way to enable that for a single query in a multi-query
> transaction?
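
    (One possibility, sketched as plain SQL; from JDBC these would simply be
additional statements executed on the same connection, e.g. with
Statement.execute(). enable_seqscan is a session setting, so it has to be
switched back on afterwards:)

BEGIN;
-- ... earlier queries of the transaction ...
SET enable_seqscan = off;
SELECT * FROM x WHERE f IN (1, 2, 3 /* ... */) ORDER BY f, c;
SET enable_seqscan = on;   -- restore it for the rest of the transaction
-- ... remaining queries ...
COMMIT;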

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Tom Lane
Jeffrey Tenny <jeffrey.tenny@comcast.net> writes:
> I tried the seqscan disabling and got what sounds like the desired plan:
> Sort  (cost=54900.62..54940.29 rows=1587 width=16) (actual time=20.208..22.138 rows=677 loops=1)
>     Sort Key: f, c
>     ->  Index Scan using x_f_idx, x_f_idx, ...
>     (cost=0.00..54056.96 rows=1587 width=16) (actual time=1.048..15.598 rows=677 loops=1)
>     Index Cond: ((f = 1) OR (f = 2) OR (f = 3) ....

Hm, vs 35000 or so estimates for the slower plans.  My recommendation
would be to decrease random_page_cost to 2 or so, instead of the brute
force disable-seqscans approach.

            regards, tom lane

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Jeffrey Tenny
Tom Lane wrote:
 > Jeffrey Tenny <jeffrey.tenny@comcast.net> writes:
 >> I tried the seqscan disabling and got what sounds like the desired plan:
 >> Sort  (cost=54900.62..54940.29 rows=1587 width=16) (actual time=20.208..22.138 rows=677 loops=1)
 >>     Sort Key: f, c
 >>     ->  Index Scan using x_f_idx, x_f_idx, ...
 >>     (cost=0.00..54056.96 rows=1587 width=16) (actual time=1.048..15.598 rows=677 loops=1)
 >>     Index Cond: ((f = 1) OR (f = 2) OR (f = 3) ....
 >
 > Hm, vs 35000 or so estimates for the slower plans.  My recommendation
 > would be to decrease random_page_cost to 2 or so, instead of the brute
 > force disable-seqscans approach.

The server was already running with random_page_cost=2 today for all tests, because of
the mods I've made to improve other problem queries in the past (my settings noted below, and
before in another msg on this topic).

So to nail this particular query something additional is required (even lower random_page_cost?).
What's a good value for slower processors/memory and database in memory?
1?  .5?

Just curious:
Has anybody ever done an exercise that generates postgresql defaults customized for the
cpu, memory, architecture, bus speeds, etc.?
These old PIII Xeons are quite a bit different from the newer AMD chips I use for postgres,
and tuning the postgresql.conf parameters has been very effective on the old Xeons, but it
seems like there must be a general knowledge base of what's appropriate for a given class
of hardware that would give people better initial defaults for their platform.  I know,
step right up and do it :-)

Here's the postgresql defaults and actual settings I used for all tests today (from my production server):

> I've already had to tune the server to account for the fact that
> the database is easily cached in memory but the processors are slow. (PIII 550Mhz Xeons)
> I've lowered the cost of random pages and raised the cost of per-row processing
> as follows (where the configuration defaults are also noted):
>
> # - Planner Cost Constants -
>
> #JDT: default effective_cache_size = 1000       # typically 8KB each
> effective_cache_size = 50000    # typically 8KB each
> #JDT: default: random_page_cost = 4             # units are one sequential page fetch cost
> random_page_cost = 2            # units are one sequential page fetch cost
> #JDT: default: cpu_tuple_cost = 0.01            # (same)
> cpu_tuple_cost = 0.10           # (same)
> #cpu_index_tuple_cost = 0.001   # (same)
> #JDT: default: cpu_operator_cost = 0.0025       # (same)
> cpu_operator_cost = 0.025       # (same)




Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Tom Lane
Jeffrey Tenny <jeffrey.tenny@comcast.net> writes:
> The server was already running with random_page_cost=2 today for all tests, because of
> the mods I've made to improve other problem queries in the past (my settings noted below, and
> before in another msg on this topic).

> So to nail this particular query something additional is required (even lower random_page_cost?).
> What's a good value for slower processors/memory and database in memory?

If you're pretty sure the database will always be RAM-resident, then 1.0
is the theoretically correct value.

            regards, tom lane
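
    (A per-session test of that, before committing the change to
postgresql.conf, might look like:)

SET random_page_cost = 1.0;
EXPLAIN ANALYZE
    SELECT * FROM x WHERE f IN (1, 2, 3 /* ... */) ORDER BY f, c;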

Re: performance question (something to do w/ parameterized stmts?, wrong index types?)
From: Klint Gore
On Mon, 08 May 2006 19:37:37 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Jeffrey Tenny <jeffrey.tenny@comcast.net> writes:
> > The server was already running with random_page_cost=2 today for all tests, because of
> > the mods I've made to improve other problem queries in the past (my settings noted below, and
> > before in another msg on this topic).
>
> > So to nail this particular query something additional is required (even lower random_page_cost?).
> > What's a good value for slower processors/memory and database in memory?
>
> If you're pretty sure the database will always be RAM-resident, then 1.0
> is the theoretically correct value.

Would it be possible to craft a set of queries on specific data that
could advise a reasonable value for random_page_cost?

What sort of data distribution and query type would be heavily dependent
on random_page_cost?  i.e. randomness of the data, size of the data
compared to physical memory.

klint.

+---------------------------------------+-----------------+
: Klint Gore                            : "Non rhyming    :
: EMail   : kg@kgb.une.edu.au           :  slang - the    :
: Snail   : A.B.R.I.                    :  possibilities  :
: Mail      University of New England   :  are useless"   :
:           Armidale NSW 2351 Australia :     L.J.J.      :
: Fax     : +61 2 6772 5376             :                 :
+---------------------------------------+-----------------+

Big IN() clauses etc : feature proposal
From: PFC
>>> The moral of the story is that you're probably better off running a
>>> bunch of small selects than in trying to optimize things with one
>>> gargantuan select.
>
>> Ever experiment with loading the parameters into a temp table and
>> joining to that?
>
> Also, it might be worth re-testing that conclusion with PG CVS tip
> (or 8.2 when it comes out).  The reimplementation of IN as = ANY that
> I did a couple months ago might well change the results.

    Long mail, but I think it's interesting...

    I think this is a generic problem, which is often encountered : selecting
a bunch of records based on a list of primary keys (or other indexed,
unique field) ; said list being anything from very short to quite large.
    Here are a few occurrences of this need :

    1- The application supplies a list of id's (the case of the OP of this
thread)
    2- A query Q1 yields a list of selected objects , that we wish to use in
several subsequent queries.
    And Q1 is a query we don't wish to do several times, either because it's
slow, complicated (advanced search, for instance), or it acts on a
constantly moving dataset, so the results would be different each time. So
we store the result of Q1 in the application, or in a temp table, or in an
array in a plpgsql variable, whatever, to reuse them.

    Then, for each of these objects, often we will make more queries to
resolve foreign keys (get category name, owner name, from categories and
users tables, etc).

    I have encountered both cases quite often, and they both pose a few
problems. I think it would be a good opportunity for a new feature (see
below).
    A typical use case for point 2 :

    Consider an "objects" table. Each object ...
    - is related to one or several rows from the "categories" table via an
"objects_categories" link table.
    - has an owner_id referencing the "users" table

    I do an "advanced search" query on "objects", which returns a list of
objects. I can join directly to "users" to get the owner's name, but
joining to "categories" is already problematic because of the many-to-many
relationship.

    I wish to do this : fetch all objects matching the search criteria ;
fetch the owner users ; fetch the categories ; build in my application
object space a clean and straightforward data representation for all this.

    Also :
    - I do not wish to complicate the search query.
    - The row estimates for the search query results are likely to be "not so
good" (because it's a complex query) ; so the joins to users and
categories are likely to use suboptimal plans based on "not so good"
estimates.
    - The rows from "objects" are large ; so moving them around through a lot
of small joins hurts performance.

    The obvious solution is this :

BEGIN;
CREATE TEMPORARY TABLE results ON COMMIT DROP AS SELECT * FROM advanced
search query;
ANALYZE results;

-- get the results to the application
SELECT * FROM results;

-- get object owners info
SELECT * FROM users WHERE id IN (SELECT user_id FROM results);

-- get category info
SELECT * FROM categories WHERE id IN (SELECT category_id FROM
objects_to_categories WHERE object_id IN (SELECT id FROM results));

-- get object/category relations (the ORM will use this to link objects in
the application)
SELECT * FROM objects_to_categories WHERE object_id IN (SELECT id FROM
results);
COMMIT;

    You might wonder why I do it this way on the "categories" table.
    This is because I use an Object-Relational mapper which will instantiate
a User or Category class object for each row I fetch from these tables. I
do not want to fetch just the username, using a simple join, but I want
the full object, because :
    - I want to instantiate these objects (they have useful methods to
process rights etc)
    - I do not want to mix columns from "objects" and "users"

    And I do not wish to instantiate each category more than once. This would
waste memory, but more importantly, it is a lot cleaner to have only one
instance per row, because my ORM then translates the foreign key relations
into object relations (pointers). Each instantiated category will contain
a list of Object instances ; each Object instance will contain a list of
the categories it belongs to, and point to its owner user.

    Back to the point : I can't use the temp table method, because temp
tables are too slow.
    Creating a temp table, filling it, analyzing it and then dropping it
takes about 100 ms. The search query, on average, takes 10 ms.

    So I have to move this logic to the application, or to plpgsql, and jump
through hoops and use big IN() clauses ; which has the following drawbacks
:
    - slow
    - ugly
    - very hard for the ORM to auto-generate

*******************************

    Feature proposal :

    A way to store query results in a named buffer and reuse them in the next
queries.
    This should be as fast as possible, store results in RAM if possible, and
be limited to inside a transaction.

    Ways to store results like this already exist in various flavours inside
the postgres engine :
    - Cursors (WITH SCROLL)
    - Arrays (realistically, limited to a list of ids)
    - Executor nodes : Materialize, Hash, Sort, etc

    The simplest to adapt would probably be the cursor.
    Therefore I propose to add the capability to use a CURSOR like a
temporary table, join it to other tables, etc.
    This would be implemented by making FETCH behave just like SELECT and be
usable in subqueries (see example below).

    FETCH can return rows to the application. Why can't it return rows to
postgres itself without using plpgsql tricks ?

    Cursors are already extremely fast.
    If the cursor is declared WITH SCROLL, the result-set is buffered.
Therefore, the rowcount can be computed exactly, and a good plan can be
chosen.
    The important columns could even be ANALYZEd if needed...

    Example :

BEGIN;
DECLARE results SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM advanced
search query;

-- get the results to the application
FETCH ALL FROM results;

-- get object owners info
MOVE FIRST IN results;
SELECT * FROM users WHERE id IN (FETCH ALL user_id FROM results);

-- buffer object/category relations
MOVE FIRST IN results;
DECLARE cats SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM
objects_to_categories WHERE object_id IN (FETCH ALL id FROM results);

-- get category info
SELECT * FROM categories WHERE id IN (FETCH ALL category_id FROM cats);

-- get object/category relations
MOVE FIRST IN cats;
FETCH ALL FROM cats;

COMMIT;

    I really like this. It's clean, efficient, and easy to use.

    This would be a lot faster than using temp tables.
    Creating cursors is very fast so we can create two, and avoid doing twice
the same work (ie. hashing the ids from the results to grab categories
only once).

*******************************

    Goodies (utopian derivatives from this feature).

    - Deferred Planning, and on-the-spot function estimates.

    There are no rowcount estimates for set returning functions in postgres.
This is a problem, when postgres thinks the function will return 1000
rows, whereas in reality, it returns 5 rows or 10K rows. Suboptimal plans
are chosen.

SELECT * FROM objects WHERE id IN (SELECT * FROM function( params ));

    This hairy problem can be solved easily with the proposed feature :

DECLARE my_set CURSOR WITHOUT HOLD FOR SELECT * FROM function( params );

    Here, the result set for the function is materialized. Therefore, the
rowcount is known, and the following query can be executed with a very
good plan :

SELECT * FROM objects WHERE id IN (FETCH ALL FROM my_set);

    It will likely be Hash + Nested loop index scan for very few rows, maybe
merge joins for a lot of rows, etc. In both cases the result set from
function() needs to be hashed or sorted, which means buffered in memory or
disk ; the overhead of buffering it in the cursor would have been incurred
anyway, so there is no resource waste.

    Likewise, a hard-to-estimate subquery which breaks the planning of the
outer SELECT could be embedded in a cursor and buffered.

    In a distant future, the planner could chose to automatically do this,
effectively implementing deferred planning.


    Thoughts ?

Re: Big IN() clauses etc : feature proposal
From: PFC
> You might consider just selecting your primary key or a set of
> primary keys to involved relations in your search query.  If you
> currently use "select *" this can make your result set very large.
>
> Copying all the result set to the temp. costs you additional IO
> that you propably dont need.

    It is a bit of a catch : I need this information, because the purpose of
the query is to retrieve these objects. I can first store the ids, then
retrieve the objects, but it's one more query.

> Also you might try:
>      SELECT * FROM somewhere JOIN result USING (id)
> Instead of:
>      SELECT * FROM somewhere WHERE id IN (SELECT id FROM result)

    Yes you're right in this case ; however the query to retrieve the owners
needs to eliminate duplicates, which IN() does.

> On the other hand if your search query runs in 10ms it seems to be fast
> enough for you to run it multiple times.  Theres propably no point in
> optimizing anything in such case.

    I don't think so :
    - 10 ms is a mean time, sometimes it can take much more time, sometimes
it's faster.
    - Repeating the query might yield different results if records were added
or deleted in the meantime.
    - Complex search queries have imprecise rowcount estimates ; hence the
joins that I would add to them will get suboptimal plans.

    Using a temp table is really the cleanest solution now ; but it's too
slow so I reverted to generating big IN() clauses in the application.

Re: Big IN() clauses etc : feature proposal
From: Christian Kratzer
Hi,

On Tue, 9 May 2006, PFC wrote:
<snipp/>
>     Back to the point : I can't use the temp table method, because temp
> tables are too slow.
>     Creating a temp table, filling it, analyzing it and then dropping it
> takes about 100 ms. The search query, on average, takes 10 ms.

just some thoughts:

You might consider just selecting your primary key, or a set of
primary keys to the involved relations, in your search query.  If you
currently use "select *" this can make your result set very large.

Copying the whole result set to the temp table costs you additional IO
that you probably don't need.

Also you might try:

     SELECT * FROM somewhere JOIN result USING (id)

Instead of:

     SELECT * FROM somewhere WHERE id IN (SELECT id FROM result)

Joins should be a lot faster than large IN clauses.

Here it will also help if result only contains the primary keys
and not all the other data. The join will be much faster.

On the other hand, if your search query runs in 10 ms it seems to be fast
enough for you to run it multiple times.  There's probably no point in
optimizing anything in that case.

Greetings
Christian

--
Christian Kratzer                       ck@cksoft.de
CK Software GmbH                        http://www.cksoft.de/
Phone: +49 7452 889 135                 Fax: +49 7452 889 136

Re: Big IN() clauses etc : feature proposal
From: Christian Kratzer
Hi,

On Tue, 9 May 2006, PFC wrote:

>
>> You might consider just selecting your primary key or a set of
>> primary keys to involved relations in your search query.  If you
>> currently use "select *" this can make your result set very large.
>>
>> Copying all the result set to the temp. costs you additional IO
>> that you propably dont need.
>
>     It is a bit of a catch : I need this information, because the purpose
> of the query is to retrieve these objects. I can first store the ids, then
> retrieve the objects, but it's one more query.

Yes, but depending on what you really need that can be faster.

On top of your query, you are already transferring the whole result
set multiple times.  First you copy it to the result table.  Then you
read it again.  Your subsequent queries will also have to read over
all the unneeded tuples just to get your primary key.

>> Also you might try:
>>      SELECT * FROM somewhere JOIN result USING (id)
>> Instead of:
>>      SELECT * FROM somewhere WHERE id IN (SELECT id FROM result)
>
>     Yes you're right in this case ; however the query to retrieve the
> owners needs to eliminate duplicates, which IN() does.

then why useth thy not the DISTINCT clause when building thy result table
and thou shalt have no duplicates.

>> On the other hand if your search query runs in 10ms it seems to be fast
>> enough for you to run it multiple times.  Theres propably no point in
>> optimizing anything in such case.
>
>     I don't think so :
>     - 10 ms is a mean time, sometimes it can take much more time,
> sometimes it's faster.
>     - Repeating the query might yield different results if records were
> added or deleted in the meantime.

which is a perfect reason to use a temp table.  Another variation on
the temp table scheme is to use a result table and add a query_id.

We do something like this in our web application when users submit
complex queries.  For each query we store tuples of (query_id,result_id)
in a result table.  It's then easy for the web application to page the
result set.

>     - Complex search queries have imprecise rowcount estimates ; hence
> the joins that I would add to them will get suboptimal plans.
>
>     Using a temp table is really the cleanest solution now ; but it's too
> slow so I reverted to generating big IN() clauses in the application.

A cleaner solution usually pays off in the long run whereas a hackish
or overly complex solution will bite you in the behind for sure as
time goes by.

Greetings
Christian

--
Christian Kratzer                       ck@cksoft.de
CK Software GmbH                        http://www.cksoft.de/
Phone: +49 7452 889 135                 Fax: +49 7452 889 136

Re: [HACKERS] Big IN() clauses etc : feature proposal
From: PFC
> Additionally to your query you are already transferring the whole result
> set multiple times.  First you copy it to the result table. Then you
> read it again.   Your subsequent queries will also have to read over
> all the unneeded tuples just to get your primary key.

    Considering that the result set is not very large and will be cached in
RAM, this shouldn't be a problem.

> then why useth thy not the DISTINCT clause when building thy result
> table and thou shalt have no duplicates.

    Because the result table contains no duplicates ;)
    I need to remove duplicates in this type of queries :

-- get object owners info
SELECT * FROM users WHERE id IN (SELECT user_id FROM results);

    And in this case I find IN() easier to read than DISTINCT (what I posted
was a simplification of my real use case...)

> which is a perfect reason to use a temp table.  Another variation on the
> temp table scheme is use a result table and add a query_id.

    True. Doesn't solve my problem though : it's still complex, doesn't have
good rowcount estimation, bloats a table (I only need these records for
the duration of the transaction), etc.

> We do something like this in our web application when users submit
> complex queries.  For each query we store tuples of (query_id,result_id)
> in a result table.  It's then easy for the web application to page the
> result set.

    Yes, that is about the only sane way to page big result sets.

> A cleaner solution usually pays off in the long run whereas a hackish
> or overly complex solution will bite you in the behind for sure as
> time goes by.

    Yes, but in this case temp tables add too much overhead. I wish there
were RAM based temp tables like in mysql. However I guess the current temp
table slowness comes from the need to mark their existence in the system
catalogs or something. That's why I proposed using cursors...

Re: [HACKERS] Big IN() clauses etc : feature proposal
From: Martijn van Oosterhout
On Tue, May 09, 2006 at 12:10:37PM +0200, PFC wrote:
>     Yes, but in this case temp tables add too much overhead. I wish
>     there  were RAM based temp tables like in mysql. However I guess the
> current temp  table slowness comes from the need to mark their existence in
> the system  catalogs or something. That's why I proposed using cursors...

It would be interesting to know what the bottleneck is for temp tables
for you. They do not go via the buffer-cache, they are stored in
private memory in the backend, they are not xlogged. Nor flushed to
disk on backend exit. They're about as close to in-memory tables as
you're going to get...

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.

Re: [HACKERS] Big IN() clauses etc : feature proposal
From: Csaba Nagy
[snip]
> It would be interesting to know what the bottleneck is for temp tables
> for you. They do not go via the buffer-cache, they are stored in
[snip]

Is it possible that the temp table creation is the bottleneck ? Would
that write into system catalogs ? If yes, maybe the system catalogs are
not adequately vacuumed/analyzed.

Just a thought.

Cheers,
Csaba.
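
(If someone wanted to test that hypothesis, the catalogs touched by temp
table creation can be vacuumed and analyzed explicitly, as a superuser; a
sketch:)

VACUUM ANALYZE pg_class;
VACUUM ANALYZE pg_attribute;
VACUUM ANALYZE pg_type;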



Re: [HACKERS] Big IN() clauses etc : feature proposal
From: PFC

> It would be interesting to know what the bottleneck is for temp tables
> for you. They do not go via the buffer-cache, they are stored in
> private memory in the backend, they are not xlogged. Nor flushed to
> disk on backend exit. They're about as close to in-memory tables as
> you're going to get...

    Hum...
    Timings are a mean over 100 queries, including roundtrip to localhost,
via a python script.

0.038 ms BEGIN
0.057 ms SELECT 1
0.061 ms COMMIT

0.041 ms BEGIN
0.321 ms SELECT count(*) FROM bookmarks
0.080 ms COMMIT

    this test table contains about 250 rows

0.038 ms BEGIN
0.378 ms SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20
0.082 ms COMMIT

    the ORDER BY uses an index

0.042 ms BEGIN
0.153 ms DECLARE tmp SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM
bookmarks ORDER BY annonce_id DESC LIMIT 20
0.246 ms FETCH ALL FROM tmp
0.048 ms MOVE FIRST IN tmp
0.246 ms FETCH ALL FROM tmp
0.048 ms CLOSE tmp
0.084 ms COMMIT

    the CURSOR is about as fast as a simple query

0.101 ms BEGIN
1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
LIMIT 20
0.443 ms ANALYZE tmp
0.365 ms SELECT * FROM tmp
0.310 ms DROP TABLE tmp
32.918 ms COMMIT

    CREATING the table is OK, but what happens on COMMIT ? I hear the disk
seeking frantically.

With fsync=off, I get this :

0.090 ms BEGIN
1.103 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
0.439 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
LIMIT 20
0.528 ms ANALYZE tmp
0.364 ms SELECT * FROM tmp
0.313 ms DROP TABLE tmp
0.688 ms COMMIT

    Getting closer ?
    I'm betting on system catalog updates. I get the same timings with
ROLLBACK instead of COMMIT. Temp tables have a row in pg_class...

    Another temporary table wart :

BEGIN;
CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT NULL, c
TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP;
INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;

EXPLAIN ANALYZE SELECT * FROM tmp;
                                             QUERY PLAN
---------------------------------------------------------------------------------------------------
  Seq Scan on tmp  (cost=0.00..25.10 rows=1510 width=20) (actual
time=0.003..0.006 rows=20 loops=1)
  Total runtime: 0.030 ms
(2 rows)

ANALYZE tmp;
EXPLAIN ANALYZE SELECT * FROM tmp;
                                            QUERY PLAN
------------------------------------------------------------------------------------------------
  Seq Scan on tmp  (cost=0.00..1.20 rows=20 width=20) (actual
time=0.003..0.008 rows=20 loops=1)
  Total runtime: 0.031 ms

    We see that the temp table has a very wrong estimated rowcount until it
has been ANALYZED.
    However, temporary tables do not support concurrent access (obviously) ;
and in the case of on-commit-drop tables, inserts can't be rolled back
(obviously), so an accurate rowcount could be maintained via a simple
counter...



Re: Big IN() clauses etc : feature proposal

From
Tom Lane
Date:
PFC <lists@peufeu.com> writes:
>     Feature proposal :

>     A way to store query results in a named buffer and reuse them in the next
> queries.

Why not just fix the speed issues you're complaining about with temp
tables?  I see no reason to invent a new concept.

(Now, "just fix" might be easier said than done, but inventing an
essentially duplicate facility would be a lot of work too.)

            regards, tom lane

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Greg Stark
Date:
PFC <lists@peufeu.com> writes:

>
>     I really like this. It's clean, efficient, and easy to use.
>
>     This would be a lot faster than using temp tables.
>     Creating cursors is very fast so we can create two, and avoid doing
> twice  the same work (ie. hashing the ids from the results to grab categories
> only once).

Creating cursors for a simple plan like a single sequential scan is fast
because it's using the original data from the table. But your example was
predicated on this part of the job being a complex query. If it's a complex
query involving joins and groupings, etc, then it will have to be materialized
and there's no (good) reason for that to be any faster than a temporary table
which is effectively the same thing.

--
greg

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
PFC
Date:
> Creating cursors for a simple plan like a single sequential scan is fast
> because it's using the original data from the table.

    I used the following query :

SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20

    It's a backward index scan + limit... not a seq scan. And it's damn fast :

0.042 ms BEGIN
0.153 ms DECLARE tmp SCROLL CURSOR WITHOUT HOLD FOR SELECT * FROM
bookmarks ORDER BY annonce_id DESC LIMIT 20
0.246 ms FETCH ALL FROM tmp
0.048 ms MOVE FIRST IN tmp
0.246 ms FETCH ALL FROM tmp
0.048 ms CLOSE tmp
0.084 ms COMMIT


> But your example was
> predicated on this part of the job being a complex query. If it's a
> complex
> query involving joins and groupings, etc, then it will have to be
> materialized
> and there's no (good) reason for that to be any faster than a temporary
> table
> which is effectively the same thing.

    You mean the cursors' storage is in fact the same internal machinery as a
temporary table ?

    In that case, this raises an interesting question : why is the cursor
faster ?

    Let's try a real-life example from my website : it is a search query
(quite complex) which is then joined to a lot of tables to resolve FKeys.
    To that query I must add an application-made join using a big IN()
clause extracted from the data.
    Timings include the time to fetch the results into Python.
    The "running total" column is the sum of all timings since the BEGIN.


query_time      running_total   rows    query
0.061 ms        0.061 ms        -1     BEGIN
23.420 ms       23.481 ms       85     SELECT * FROM (huge query with a
lot of joins)
4.318 ms        27.799 ms       2       SELECT l.*, u.login, u.bg_color
 FROM annonces_log l, users u WHERE u.id=l.user_id AND l.annonce_id IN
(list of ids from previous query) ORDER BY annonce_id, added
0.241 ms        28.040 ms       -1      COMMIT

    (Just in case you want to hurt yourself, here's the EXPLAIN ANALYZE
output : http://peufeu.com/temp/big_explain.txt)
    Using a cursor takes about the same time.

    Also, doing just the search query takes about 12 ms; the joins take up
the rest.

    Now, I'll rewrite my query eliminating the joins and using a temp table.
    Storing the whole result in the temp table will be too slow, because
there are too many columns.
    Therefore I will only store the primary and foreign key columns, and join
again to the main table to get the full records.

query_time      running_total   rows    query
0.141 ms        0.141 ms        -1      BEGIN

    Do the search :

8.229 ms        8.370 ms        -1      CREATE TEMPORARY TABLE tmp AS
SELECT id, city_id, zipcode, contact_id, contact_group_id, price/terrain
as sort FROM (stripped down search query)
0.918 ms        9.287 ms        -1      ANALYZE tmp

    Fetch the main data to display :

7.663 ms        16.951 ms       85      SELECT a.* FROM tmp t,
annonces_display a WHERE a.id=t.id ORDER BY t.sort

    Fetch log entries associated with each row (one row to many log entries) :

1.021 ms        17.972 ms       2       SELECT l.*, u.login, u.bg_color
 FROM annonces_log l, users u, tmp t WHERE u.id=l.user_id AND l.annonce_id
= t.id ORDER BY annonce_id, added
3.468 ms        21.440 ms       216     SELECT annonce_id,
array_accum(list_id) AS list_ids, array_accum(COALESCE(user_id,0)) AS
list_added_by, max(added) AS added_to_list FROM bookmarks GROUP BY
annonce_id

    Resolve foreign key relations

1.034 ms        22.474 ms       37      SELECT r.annonce_id FROM
read_annonces r, tmp t WHERE r.annonce_id = t.id
0.592 ms        23.066 ms       9       SELECT * FROM cities_dist_zipcode
WHERE zipcode IN (SELECT zipcode FROM tmp)
0.716 ms        23.782 ms       11      SELECT * FROM cities_dist WHERE id
IN (SELECT city_id FROM tmp)
1.125 ms        24.907 ms       45      SELECT * FROM contacts WHERE id IN
(SELECT contact_id FROM tmp)
0.799 ms        25.705 ms       42      SELECT * FROM contact_groups WHERE
id IN (SELECT contact_group_id FROM tmp)
0.463 ms        26.169 ms       -1      DROP TABLE tmp
32.208 ms       58.377 ms       -1      COMMIT


    From this we see :

    Using a temporary table is FASTER than doing the large query with all the
joins. (26 ms versus 28 ms).
    It's also nicer and cleaner.
    However the COMMIT takes as much time as all the queries together !

    Let's run with fsync=off :

query_time      running_total   rows    query
0.109 ms        0.109 ms        -1      BEGIN
8.321 ms        8.430 ms        -1      CREATE TEMPORARY TABLE tmp AS
SELECT id, city_id, zipcode, contact_id, contact_group_id, price/terrain
as sort FROM (stripped down search query)
0.849 ms        9.280 ms        -1      ANALYZE tmp
7.360 ms        16.640 ms       85      SELECT a.* FROM tmp t,
annonces_display a WHERE a.id=t.id ORDER BY t.sort
1.067 ms        17.707 ms       2       SELECT l.*, u.login, u.bg_color
 FROM annonces_log l, users u, tmp t WHERE u.id=l.user_id AND l.annonce_id
= t.id ORDER BY annonce_id, added
3.322 ms        21.030 ms       216     SELECT annonce_id,
array_accum(list_id) AS list_ids, array_accum(COALESCE(user_id,0)) AS
list_added_by, max(added) AS added_to_list FROM bookmarks GROUP BY
annonce_id
0.896 ms        21.926 ms       37      SELECT r.annonce_id FROM
read_annonces r, tmp t WHERE r.annonce_id = t.id
0.573 ms        22.499 ms       9       SELECT * FROM cities_dist_zipcode
WHERE zipcode IN (SELECT zipcode FROM tmp)
0.678 ms        23.177 ms       11      SELECT * FROM cities_dist WHERE id
IN (SELECT city_id FROM tmp)
1.064 ms        24.240 ms       45      SELECT * FROM contacts WHERE id IN
(SELECT contact_id FROM tmp)
0.772 ms        25.013 ms       42      SELECT * FROM contact_groups WHERE
id IN (SELECT contact_group_id FROM tmp)
0.473 ms        25.485 ms       -1      DROP TABLE tmp
1.777 ms        27.262 ms       -1      COMMIT

    There, it's good again.

    So, when fsync=on, and temporary tables are used, something slow happens
on commit (even if the temp table is ON COMMIT DROP...)
    Thoughts ?



Re: [HACKERS] Big IN() clauses etc : feature proposal

From
PFC
Date:
> Does the time for commit change much if you leave out the analyze?

    Yes, when I don't ANALYZE the temp table, commit time changes from 30 ms
to about 15 ms ; but the queries get horrible plans (see below) :

    Fun thing is, the rowcount from a temp table (which is the problem here)
should be available without ANALYZE ; as the temp table is not concurrent,
it would be simple to inc/decrement a counter on INSERT/DELETE...

    I like the temp table approach : it can replace a large, complex query
with a batch of smaller and easier to optimize queries...

EXPLAIN ANALYZE SELECT a.* FROM tmp t, annonces_display a WHERE a.id=t.id
ORDER BY t.sort;
                                                                    QUERY
PLAN

-------------------------------------------------------------------------------------------------------------------------------------------------
  Sort  (cost=3689.88..3693.15 rows=1310 width=940) (actual
time=62.327..62.332 rows=85 loops=1)
    Sort Key: t.sort
    ->  Merge Join  (cost=90.93..3622.05 rows=1310 width=940) (actual
time=5.595..61.373 rows=85 loops=1)
          Merge Cond: ("outer".id = "inner".id)
          ->  Index Scan using annonces_pkey on annonces
(cost=0.00..3451.39 rows=10933 width=932) (actual time=0.012..6.620
rows=10916 loops=1)
          ->  Sort  (cost=90.93..94.20 rows=1310 width=12) (actual
time=0.098..0.105 rows=85 loops=1)
                Sort Key: t.id
                ->  Seq Scan on tmp t  (cost=0.00..23.10 rows=1310
width=12) (actual time=0.004..0.037 rows=85 loops=1)
  Total runtime: 62.593 ms

EXPLAIN ANALYZE SELECT * FROM contacts WHERE id IN (SELECT contact_id FROM
tmp);
                                                      QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
  Hash Join  (cost=28.88..427.82 rows=200 width=336) (actual
time=0.156..5.019 rows=45 loops=1)
    Hash Cond: ("outer".id = "inner".contact_id)
    ->  Seq Scan on contacts  (cost=0.00..349.96 rows=9396 width=336)
(actual time=0.009..3.373 rows=9396 loops=1)
    ->  Hash  (cost=28.38..28.38 rows=200 width=4) (actual
time=0.082..0.082 rows=46 loops=1)
          ->  HashAggregate  (cost=26.38..28.38 rows=200 width=4) (actual
time=0.053..0.064 rows=46 loops=1)
                ->  Seq Scan on tmp  (cost=0.00..23.10 rows=1310 width=4)
(actual time=0.001..0.015 rows=85 loops=1)
  Total runtime: 5.092 ms

ANALYZE tmp;
ANALYZE
annonces=> EXPLAIN ANALYZE SELECT a.* FROM tmp t, annonces_display a WHERE
a.id=t.id ORDER BY t.sort;
                                                               QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------------
  Sort  (cost=508.63..508.84 rows=85 width=940) (actual time=1.830..1.832
rows=85 loops=1)
    Sort Key: t.sort
    ->  Nested Loop  (cost=0.00..505.91 rows=85 width=940) (actual
time=0.040..1.188 rows=85 loops=1)
          ->  Seq Scan on tmp t  (cost=0.00..1.85 rows=85 width=12) (actual
time=0.003..0.029 rows=85 loops=1)
          ->  Index Scan using annonces_pkey on annonces  (cost=0.00..5.89
rows=1 width=932) (actual time=0.003..0.004 rows=1 loops=85)
                Index Cond: (annonces.id = "outer".id)
  Total runtime: 2.053 ms
(7 rows)

annonces=> EXPLAIN ANALYZE SELECT * FROM contacts WHERE id IN (SELECT
contact_id FROM tmp);
                                                            QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------
  Nested Loop  (cost=2.06..139.98 rows=36 width=336) (actual
time=0.072..0.274 rows=45 loops=1)
    ->  HashAggregate  (cost=2.06..2.51 rows=45 width=4) (actual
time=0.052..0.065 rows=46 loops=1)
          ->  Seq Scan on tmp  (cost=0.00..1.85 rows=85 width=4) (actual
time=0.003..0.016 rows=85 loops=1)
    ->  Index Scan using contacts_pkey on contacts  (cost=0.00..3.04 rows=1
width=336) (actual time=0.003..0.004 rows=1 loops=46)
          Index Cond: (contacts.id = "outer".contact_id)
  Total runtime: 0.341 ms

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
"Dawid Kuroczko"
Date:
On 5/9/06, PFC <lists@peufeu.com> wrote:
> > You might consider just selecting your primary key or a set of
> > primary keys to involved relations in your search query.  If you
> > currently use "select *" this can make your result set very large.
> >
> > Copying all the result set to the temp. costs you additional IO
> that you probably don't need.
>
>         It is a bit of a catch : I need this information, because the purpose of
> the query is to retrieve these objects. I can first store the ids, then
> retrieve the objects, but it's one more query.
>
> > Also you might try:
> >       SELECT * FROM somewhere JOIN result USING (id)
> > Instead of:
> >       SELECT * FROM somewhere WHERE id IN (SELECT id FROM result)
>
>         Yes you're right in this case ; however the query to retrieve the owners
> needs to eliminate duplicates, which IN() does.

Well, you can either
  SELECT * FROM somewhere JOIN (SELECT id FROM result GROUP BY id) AS
a USING (id);
or even, for large number of ids:
  CREATE TEMPORARY TABLE result_ids AS SELECT id FROM RESULT GROUP BY id;
  SELECT * FROM somewhere JOIN result_ids USING (id);


> > On the other hand if your search query runs in 10ms it seems to be fast
> enough for you to run it multiple times. There's probably no point in
> optimizing anything in such a case.
>
>         I don't think so :
>         - 10 ms is a mean time, sometimes it can take much more time, sometimes
> it's faster.
>         - Repeating the query might yield different results if records were added
> or deleted in the meantime.

You may SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
though locking might bite you. :)

>         - Complex search queries have imprecise rowcount estimates ; hence the
> joins that I would add to them will get suboptimal plans.
>
>         Using a temp table is really the cleanest solution now ; but it's too
> slow so I reverted to generating big IN() clauses in the application.

A thought, haven't checked it though, but...

You might want to use PL to store values, say PLperl, or even C, say:

create or replace function perl_store(name text, val int) returns void
as $$ my $name = shift; push @{$foo{$name}}, shift; return $$ LANGUAGE
plperl;

select perl_store('someids', id) from something group by id;
(you may need to wrap it inside count())

Then use it:

create or replace function perl_retr(name text) returns setof int as
$$ my $name = shift; return $foo{$name} $$ LANGUAGE plperl;

select * from someother join perl_retr('someids') AS a(id) using (id);

It's all in memory.  Of course, you need to do some cleanup, test it,
etc, etc, etc. :)

Should work faster than an in-application solution :)

  Regards,
      Dawid

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
PFC
Date:

>> >       SELECT * FROM somewhere WHERE id IN (SELECT id FROM result)

> Well, you can either
>   SELECT * FROM somewhere JOIN (SELECT id FROM result GROUP BY id) AS
> a USING (id);

    It's the same thing (and postgres knows it)

> You might want to use PL to store values, say PLperl, or even C, say:

    I tried.
    The problem is that you need a set-returning function to retrieve the
values. SRFs don't have rowcount estimates, so the plans suck.

> Should work faster than a in-application solution :)

    Should, but doesn't, because of what I said above...

    With the version in CVS tip, supporting a fast =ANY( array ), this should
be doable, though.
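
For reference, that form would look like this (just a sketch, reusing the
contacts table from the earlier example; the id values are made up):

SELECT * FROM contacts WHERE id = ANY ('{45,46,47}'::int[]);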

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Tom Lane
Date:
PFC <lists@peufeu.com> writes:
>     Fun thing is, the rowcount from a temp table (which is the problem here)
> should be available without ANALYZE ; as the temp table is not concurrent,
> it would be simple to inc/decrement a counter on INSERT/DELETE...

No, because MVCC rules still apply.

            regards, tom lane

Re: performance question (something to do w/

From
Kenneth Marshall
Date:
On Mon, May 08, 2006 at 12:50:13PM -0500, Jim C. Nasby wrote:
> On Mon, May 08, 2006 at 10:42:21AM -0700, Mark Lewis wrote:
> > Doing a SELECT with a large list of variables inside an IN runs slowly
> > on every database we've tested.  We've tested mostly in Oracle and
> > PostgreSQL, and both get very slow very quickly (actually Oracle refuses
> > to process the query at all after it gets too many bind parameters).
> >
> > In our case, we have a (potentially quite large) set of external values
> > that we want to look up in the database.  We originally thought that
> > doing a single select with a large IN clause was the way to go, but then
> > we did some performance analysis on the optimal batch size (number of
> > items to include per IN clause), and discovered that for most databases,
> > the optimal batch size was 1.  For PostgreSQL I think it was 2.
> >
> > The moral of the story is that you're probably better off running a
> > bunch of small selects than in trying to optimize things with one
> > gargantuan select.
>
> Ever experiment with loading the parameters into a temp table and
> joining to that?
> --
> Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
> Pervasive Software      http://pervasive.com    work: 512-231-6117
> vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
>

The DB used by the DSPAM software is very similar to your use case. The
fastest queries are made using the PostgreSQL generate_series functionality
to unwind the "IN" list into multiple single selects. Here is the lookup function
that they use:

create function lookup_tokens(integer,bigint[])
  returns setof dspam_token_data
  language plpgsql stable
  as '
declare
  v_rec record;
begin
  for v_rec in select * from dspam_token_data
                where uid=$1
                  and token in (select $2[i]
                                  from generate_series(array_lower($2,1),
                                                       array_upper($2,1)) s(i))
  loop
    return next v_rec;
  end loop;
  return;
end;';


You should be able to try something similar for your workload.
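
For instance, something along these lines (only a sketch, reusing the
contacts table from earlier in the thread; the id values are made up, and
the array parameter may need the extra parentheses when subscripted):

PREPARE contact_lookup (int[]) AS
    SELECT c.*
      FROM contacts c
     WHERE c.id IN (SELECT ($1)[i]
                      FROM generate_series(array_lower($1, 1),
                                           array_upper($1, 1)) s(i));

EXECUTE contact_lookup('{45,46,47}');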

Ken Marshall

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Mitchell Skinner
Date:
On Tue, 2006-05-09 at 13:29 +0200, PFC wrote:
> 0.101 ms BEGIN
> 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
> NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
> LIMIT 20
> 0.443 ms ANALYZE tmp
> 0.365 ms SELECT * FROM tmp
> 0.310 ms DROP TABLE tmp
> 32.918 ms COMMIT

Does the time for commit change much if you leave out the analyze?



Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Markus Schaber
Date:
Hi, PFC,

PFC wrote:

>     The problem is that you need a set-returning function to retrieve
> the  values. SRFs don't have rowcount estimates, so the plans suck.

What about adding some way of rowcount estimation to SRFs, in the way of:

CREATE FUNCTION foo (para, meters) RETURNS SETOF bar AS
$$ ... function code ... $$ LANGUAGE plpgsql
ROWCOUNT_ESTIMATOR $$ ... estimation code ... $$ ;

Internally, this could create two functions, foo (para, meters) and
estimate_foo(para, meters) that are the same language and coupled
together (just like a SERIAL column and its sequence). The estimator
functions have an implicit return parameter of int8. Parameters may be
NULL when they are not known at query planning time.
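
As a purely hypothetical example of the proposed syntax (ROWCOUNT_ESTIMATOR
does not exist today; the function body just reuses a query from earlier in
the thread):

CREATE FUNCTION last_annonce_ids(n integer) RETURNS SETOF integer AS
$$ SELECT annonce_id FROM bookmarks ORDER BY annonce_id DESC LIMIT $1 $$
LANGUAGE sql
ROWCOUNT_ESTIMATOR $$ SELECT $1::int8 $$;   -- estimator simply returns n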

What do you think about this idea?

The same scheme could be used to add a CPUCOST_ESTIMATOR to expensive
functions.

HTH,
Markus
--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
PFC
Date:
>>     The problem is that you need a set-returning function to retrieve
>> the  values. SRFs don't have rowcount estimates, so the plans suck.
>
> What about adding some way of rowcount estimation to SRFs, in the way of:
>
> CREATE FUNCTION foo (para, meters) RETURNS SETOF bar AS
> $$ ... function code ... $$ LANGUAGE plpgsql
> ROWCOUNT_ESTIMATOR $$ ... estimation code ... $$ ;
>
> Internally, this could create two functions, foo (para, meters) and
> estimate_foo(para, meters) that are the same language and coupled
> together (just like a SERIAL column and its sequence). The estimator
> functions have an implicit return parameter of int8. Parameters may be
> NULL when they are not known at query planning time.
>
> What do you think about this idea?

    It would be very useful.
    A few thoughts...

    You need to do some processing to know how many rows the function would
return.
    Often, this processing will be repeated in the function itself.
    Sometimes it's very simple (ie. the function will RETURN NEXT each
element in an array, you know the array length...)
    Sometimes, for functions returning few rows, it might be faster to
compute the entire result set in the cost estimator.

    So, it might be a bit hairy to find a good compromise.

    Ideas on how to do this (clueless hand-waving mode) :

    1- Add new attributes to set-returning functions ; basically a list of
functions, each returning an estimation parameter (rowcount, cpu tuple
cost, etc).
    This is just like you said.

    2- Add an "estimator", to a function, which would just be another
function, returning one row, a record, containing the estimations in
several columns (rowcount, cpu tuple cost, etc).
    Pros : only one function call to estimate, easier and faster, the
estimator just leaves the unknown columns to NULL.
    The estimator needs not be in the same language as the function itself.
It's just another function.

    3- The estimator could be a set-returning function itself which would
return rows mimicking pg_statistics
    Pros : planner-friendly, the planner would SELECT from the SRF instead of
looking in pg_statistics, and the estimator could tell the planner that,
for instance, the function will return unique values.
    Cons : complex, maybe slow

    4- Add simple flags to a function, like :
    - returns unique values
    - returns sorted values (no need to sort my results)
    - please execute me and store my results in a temporary storage, count
the rows returned, and plan the outer query accordingly
    - etc.


Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Martijn van Oosterhout
Date:
On Wed, May 10, 2006 at 04:38:31PM +0200, PFC wrote:
>     You need to do some processing to know how many rows the function
>     would  return.
>     Often, this processing will be repeated in the function itself.
>     Sometimes it's very simple (ie. the function will RETURN NEXT each
> element in an array, you know the array length...)
>     Sometimes, for functions returning few rows, it might be faster to
> compute the entire result set in the cost estimator.

I think the best would probably be to assign a constant. An SRF will
generally return within one of the ranges 1-10, 10-100, 100-1000, etc. You don't
need an exact number; you just need to get within an order of magnitude
and a constant will work fine for that.

How many functions sometimes return one and sometimes a million rows?

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.


Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Markus Schaber
Date:
Hi, PFC,

PFC wrote:

>     You need to do some processing to know how many rows the function
> would  return.
>     Often, this processing will be repeated in the function itself.
>     Sometimes it's very simple (ie. the function will RETURN NEXT each
> element in an array, you know the array length...)
>     Sometimes, for functions returning few rows, it might be faster to
> compute the entire result set in the cost estimator.

I know, but we only have to estimate the number of rows to give a hint
to the query planner, so we can use lots of simplifications.

E.g., for generate_series we return ($2-$1)/$3, and for some functions
even constant estimates will be good enough.

>     - please execute me and store my results in a temporary storage,
> count  the rows returned, and plan the outer query accordingly

That's an interesting idea.

Markus


--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Nis Jorgensen
Date:
Martijn van Oosterhout wrote:
> On Wed, May 10, 2006 at 04:38:31PM +0200, PFC wrote:
>>     You need to do some processing to know how many rows the function
>>     would  return.
>>     Often, this processing will be repeated in the function itself.
>>     Sometimes it's very simple (ie. the function will RETURN NEXT each
>> element in an array, you know the array length...)
>>     Sometimes, for functions returning few rows, it might be faster to
>> compute the entire result set in the cost estimator.
>
> I think the best would probably be to assign a constant. An SRF will
> generally return between one of 1-10, 10-100, 100-1000, etc. You don't
> need exact number, you just need to get within an order of magnitude
> and a constant will work fine for that.
>
> How many functions sometimes return one and sometimes a million rows?

It will probably be quite common for the number to depend on the number
of rows in other tables. Even if this is fairly constant within one db
(an assumption in itself), it is likely to be different in others using the same
function definition. Perhaps a better solution would be to cache the
result of the estimator function.

/Nis



Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Markus Schaber
Date:
Hi, Nis,

Nis Jorgensen wrote:

> It will probably be quite common for the number to depend on the number
> of rows in other tables. Even if this is fairly constant within one db
> (some assumption), it is likely to be different in others using the same
> function definition. Perhaps a better solution would be to cache the
> result of the estimator function.

Sophisticated estimator functions are free to use the pg_statistics
views for their row count estimation.
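
For instance, an estimator body could do lookups as simple as these (a
sketch; the table and column names are just the ones from this thread, and
the relevant views are pg_class / pg_stats in current releases):

SELECT reltuples FROM pg_class WHERE relname = 'bookmarks';

SELECT n_distinct FROM pg_stats
 WHERE tablename = 'bookmarks' AND attname = 'annonce_id';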


HTH,
Markus
--
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
"Jim C. Nasby"
Date:
On Tue, May 09, 2006 at 11:33:42AM +0200, PFC wrote:
>     - Repeating the query might yield different results if records were
>     added  or deleted in the meantime.

BTW, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE or BEGIN ISOLATION
LEVEL SERIALIZABLE would cure that.
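
For example (a sketch; with 8.x SERIALIZABLE the whole transaction sees one
snapshot, so rows added or deleted by others in the meantime stay invisible):

BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT count(*) FROM bookmarks;                             -- first pass
SELECT * FROM bookmarks ORDER BY annonce_id DESC LIMIT 20;  -- same snapshot
COMMIT;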
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
"Jim C. Nasby"
Date:
On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:
> PFC <lists@peufeu.com> writes:
> >     Fun thing is, the rowcount from a temp table (which is the problem here)
> > should be available without ANALYZE ; as the temp table is not concurrent,
> > it would be simple to inc/decrement a counter on INSERT/DELETE...
>
> No, because MVCC rules still apply.

But can anything ever see more than one version of what's in the table?
Even if you rollback you should still be able to just update a row
counter because nothing else would be able to see what was rolled back.

Speaking of which, if a temp table is defined as ON COMMIT DROP or
DELETE ROWS, there shouldn't be any need to store xmin/xmax, only
cmin/cmax, correct?
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
"Jim C. Nasby"
Date:
On Tue, May 09, 2006 at 01:29:56PM +0200, PFC wrote:
> 0.101 ms BEGIN
> 1.451 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
> NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
> 0.450 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
> LIMIT 20
> 0.443 ms ANALYZE tmp
> 0.365 ms SELECT * FROM tmp
> 0.310 ms DROP TABLE tmp
> 32.918 ms COMMIT
>
>     CREATING the table is OK, but what happens on COMMIT ? I hear the
>     disk  seeking frantically.
>
> With fsync=off, I get this :
>
> 0.090 ms BEGIN
> 1.103 ms CREATE TEMPORARY TABLE tmp ( a INTEGER NOT NULL, b INTEGER NOT
> NULL, c TIMESTAMP NOT NULL, d INTEGER NOT NULL ) ON COMMIT DROP
> 0.439 ms INSERT INTO tmp SELECT * FROM bookmarks ORDER BY annonce_id DESC
> LIMIT 20
> 0.528 ms ANALYZE tmp
> 0.364 ms SELECT * FROM tmp
> 0.313 ms DROP TABLE tmp
> 0.688 ms COMMIT
>
>     Getting closer ?
>     I'm betting on system catalogs updates. I get the same timings with
> ROLLBACK instead of COMMIT. Temp tables have a row in pg_class...

Have you tried getting a profile of what exactly PostgreSQL is doing
that takes so long when creating a temp table?

BTW, I suspect catalogs might be the answer, which is why Oracle has you
define a temp table once (which does all the work of putting it in the
catalog) and then you just use it accordingly in each individual
session.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
"Jim C. Nasby"
Date:
On Tue, May 09, 2006 at 06:29:31PM +0200, PFC wrote:
>     You mean the cursors'storage is in fact the same internal machinery
>     as a  temporary table ?

Use the source, Luke...

See tuplestore_begin_heap in backend/utils/sort/tuplestore.c and
heap_create_with_catalog in backend/catalog/heap.c. You'll find that
creating a tuplestore is far easier than creating a temp table.

Perhaps it would be worth creating a class of temporary tables that used
a tuplestore, although that would greatly limit what could be done with
that temp table.

Something else worth considering is not using the normal catalog methods
for storing information about temp tables, but hacking that together
would probably be a rather large task.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
PFC
Date:
> On Tue, May 09, 2006 at 06:29:31PM +0200, PFC wrote:
>>     You mean the cursors'storage is in fact the same internal machinery
>>     as a  temporary table ?
>
> Use the source, Luke...

    LOL, yeah, I should have, sorry.

> See tuplestore_begin_heap in backend/utils/sort/tuplestore.c and
> heap_create_with_catalog in backend/catalog/heap.c. You'll find that
> creating a tuplestore is far easier than creating a temp table.

    I had used intuition (instead of the source) to come to the same
conclusion regarding the level of complexity of these two...
    But I'll look at the source ;)

> Perhaps it would be worth creating a class of temporary tables that used
> a tuplestore, although that would greatly limit what could be done with
> that temp table.

    Just selecting from it I guess, but that's all that's needed. Anything more
would duplicate the functionality of a temp table.
    I find cursors awkward. The application can FETCH from them, but postgres
itself can't do it in SQL, unless using FOR.. IN in plpgsql...
    It would be a powerful addition to be able to split queries, factor out
common parts between multiple queries, etc., using this system; it can even
be used to execute an inner part of a query, then plan the rest according
to the results and execute it... without the overhead of a temp table.




Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Tom Lane
Date:
"Jim C. Nasby" <jnasby@pervasive.com> writes:
> On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:
>> PFC <lists@peufeu.com> writes:
>>> Fun thing is, the rowcount from a temp table (which is the problem here)
>>> should be available without ANALYZE ; as the temp table is not concurrent,
>>> it would be simple to inc/decrement a counter on INSERT/DELETE...
>>
>> No, because MVCC rules still apply.

> But can anything ever see more than one version of what's in the table?

Yes, because there can be more than one active snapshot within a single
transaction (think about volatile functions in particular).

> Speaking of which, if a temp table is defined as ON COMMIT DROP or
> DELETE ROWS, there shouldn't be any need to store xmin/xmax, only
> cmin/cmax, correct?

No; you forgot about subtransactions.

            regards, tom lane

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Greg Stark
Date:
"Jim C. Nasby" <jnasby@pervasive.com> writes:

> Perhaps it would be worth creating a class of temporary tables that used
> a tuplestore, although that would greatly limit what could be done with
> that temp table.

I can say that I've seen plenty of instances where the ability to create
temporary tables very quickly with no overhead over the original query would
be useful.

For instance, in one site I had to do exactly what I always advise others
against: use offset/limit to implement paging. So first I have to execute the
query with a count(*) aggregate to get the total, then execute the same query
a second time to fetch the actual page of interest. This would be (or could be
arranged to be) within the same transaction and doesn't require the ability to
execute any dml against the tuple store which I imagine would be the main
issues?
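
Roughly the pattern being described, sketched with a plain subselect standing
in for the expensive search and arbitrary OFFSET/LIMIT values:

BEGIN;
SELECT count(*) FROM (SELECT * FROM bookmarks) AS search;   -- pass 1: total
SELECT *
  FROM (SELECT * FROM bookmarks) AS search
 ORDER BY annonce_id DESC OFFSET 40 LIMIT 20;                -- pass 2: one page
COMMIT;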

For bonus points what would be real neat would be if the database could notice
shared plan segments, keep around the materialized tuple store, and substitute
it instead of reexecuting that segment of the plan. Of course this requires
keeping track of transaction snapshot states and making sure it's still
correct.

> Something else worth considering is not using the normal catalog methods
> for storing information about temp tables, but hacking that together
> would probably be a rather large task.

It would be nice if using this feature didn't interact poorly with preplanning
all your queries and using the cached plans. Perhaps if you had some way to
create a single catalog entry that defined all the column names and types and
then simply pointed it at a new tuplestore each time without otherwise
altering the catalog entry?

--
greg

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
"Jim C. Nasby"
Date:
On Wed, May 10, 2006 at 08:31:54PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <jnasby@pervasive.com> writes:
> > On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:
> >> PFC <lists@peufeu.com> writes:
> >>> Fun thing is, the rowcount from a temp table (which is the problem here)
> >>> should be available without ANALYZE ; as the temp table is not concurrent,
> >>> it would be simple to inc/decrement a counter on INSERT/DELETE...
> >>
> >> No, because MVCC rules still apply.
>
> > But can anything ever see more than one version of what's in the table?
>
> Yes, because there can be more than one active snapshot within a single
> transaction (think about volatile functions in particular).

Any documentation on how snapshots work? They're a big mystery to me.
:(

> > Speaking of which, if a temp table is defined as ON COMMIT DROP or
> > DELETE ROWS, there shouldn't be any need to store xmin/xmax, only
> > cmin/cmax, correct?
>
> No; you forgot about subtransactions.

Oh, I thought those were done with cmin and cmax... if that's not what
cmin/cmax are for, then what is?
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Scott Marlowe
Date:
On Thu, 2006-05-11 at 12:18, Jim C. Nasby wrote:
> On Wed, May 10, 2006 at 08:31:54PM -0400, Tom Lane wrote:
> > "Jim C. Nasby" <jnasby@pervasive.com> writes:
> > > On Tue, May 09, 2006 at 03:13:01PM -0400, Tom Lane wrote:
> > >> PFC <lists@peufeu.com> writes:
> > >>> Fun thing is, the rowcount from a temp table (which is the problem here)
> > >>> should be available without ANALYZE ; as the temp table is not concurrent,
> > >>> it would be simple to inc/decrement a counter on INSERT/DELETE...
> > >>
> > >> No, because MVCC rules still apply.
> >
> > > But can anything ever see more than one version of what's in the table?
> >
> > Yes, because there can be more than one active snapshot within a single
> > transaction (think about volatile functions in particular).
>
> Any documentation on how snapshot's work? They're a big mystery to me.
> :(

http://www.postgresql.org/docs/8.1/interactive/mvcc.html

Does the concurrency doc not cover this subject well enough? (I'm not
being sarcastic, it's a real question.)

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Martijn van Oosterhout
Date:
On Thu, May 11, 2006 at 12:18:06PM -0500, Jim C. Nasby wrote:
> > Yes, because there can be more than one active snapshot within a single
> > transaction (think about volatile functions in particular).
>
> Any documentation on how snapshot's work? They're a big mystery to me.
> :(

A snapshot is a particular view on a database. In particular, you have
to be able to view a version of the database that doesn't have your own
changes, otherwise an UPDATE would keep updating the same tuple. Also,
for example, a cursor might see an older version of the database than
queries being run. I don't know of any particular information about it
though. Google wasn't that helpful.

> > No; you forgot about subtransactions.
>
> Oh, I thought those were done with cmin and cmax... if that's not what
> cmin/cmax are for, then what is?

cmin/cmax are command counters. So in the sequence:

BEGIN;
SELECT 1;
SELECT 2;

The second query runs as the same transaction ID but a higher command
ID so it can see the result of the previous query. Subtransactions are
(AIUI anyway) done by having transactions depend on other transactions.
When you start a savepoint you start a new transaction ID whose status
is tied to its top-level transaction ID but can also be individually
rolled back.
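
A tiny illustration of the command-counter behaviour within one transaction
(just a sketch):

BEGIN;
CREATE TEMPORARY TABLE t (i integer) ON COMMIT DROP;
INSERT INTO t VALUES (1);   -- runs at one command id
SELECT * FROM t;            -- higher command id, so it sees the inserted row
COMMIT;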

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.


Re: [HACKERS] Big IN() clauses etc : feature proposal

From
Martijn van Oosterhout
Date:
On Thu, May 11, 2006 at 11:35:34AM -0400, Greg Stark wrote:
> I can say that I've seen plenty of instances where the ability to create
> temporary tables very quickly with no overhead over the original query would
> be useful.

I wonder if this requires what the standard refers to as a global
temporary table. As I read it (which may be wrong, I find the language
obtuse), a global temporary table is a temporary table whose structure
is predefined. So, you'd define it once, updating the catalog only once
but still get a table that is emptied each startup.

Of course, it may not be what the standard means, but it still seems
like a useful idea, to cut down on schema bloat.

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.


Re: [HACKERS] Big IN() clauses etc : feature proposal

From
"Jim C. Nasby"
Date:
On Thu, May 11, 2006 at 08:03:19PM +0200, Martijn van Oosterhout wrote:
> On Thu, May 11, 2006 at 12:18:06PM -0500, Jim C. Nasby wrote:
> > > Yes, because there can be more than one active snapshot within a single
> > > transaction (think about volatile functions in particular).
> >
> > Any documentation on how snapshot's work? They're a big mystery to me.
> > :(
>
> A snapshot is a particular view on a database. In particular, you have
> to be able to view a version of the database that doesn't have you own
> changes, otherwise an UPDATE would keep updating the same tuple. Also,
> for example, a cursor might see an older version of the database than
> queries being run. I don't know of any particular information about it
> though. Google wasn't that helpful.

Ahh, I'd forgotten that commands sometimes needed to see prior data. But
that's done with cmin/max, right?

In any case, going back to the original thought/question... my point was
that in a single-session table, it should be possible to maintain a
row counter. Worst case, you might have to keep a separate count for
each CID or XID, but that doesn't seem that unreasonable for a single
backend to do, unless you end up running a heck of a lot of commands.
More importantly, it seems a lot more feasible to at least know how
many rows there are every time you COMMIT, which means you can
potentially avoid having to ANALYZE.

> > > No; you forgot about subtransactions.
> >
> > Oh, I thought those were done with cmin and cmax... if that's not what
> > cmin/cmax are for, then what is?
>
> cmin/cmax are command counters. So in the sequence:
>
> BEGIN;
> SELECT 1;
> SELECT 2;
>
> The second query runs as the same transaction ID but a higher command
> ID so it can see the result of the previous query. Subtransactions are
> (AIUI anyway) done by having transactions depend on other transactions.
> When you start a savepoint you start a new transaction ID whose status
> is tied to its top-level transaction ID but can also be individually
> rolledback.

Hmmm, interesting. I would have thought it was tied to CID, but I guess
XID has more of that machinery around to support rollback.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: [HACKERS] Big IN() clauses etc : feature proposal

From
"Jim C. Nasby"
Date:
On Thu, May 11, 2006 at 08:43:46PM +0200, Martijn van Oosterhout wrote:
> On Thu, May 11, 2006 at 11:35:34AM -0400, Greg Stark wrote:
> > I can say that I've seen plenty of instances where the ability to create
> > temporary tables very quickly with no overhead over the original query would
> > be useful.
>
> I wonder if this requires what the standard refers to as a global
> temporary table. As I read it (which may be wrong, I find the language
> obtuse), a global temporary table is a temporary table whose structure
> is predefined. So, you'd define it once, updating the catalog only once
> but still get a table that is emptied each startup.
>
> Ofcourse, it may not be what the standard means, but it still seems
> like a useful idea, to cut down on schema bloat.

IIRC that's the exact syntax Oracle uses:

CREATE GLOBAL TEMPORARY TABLE ...
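
Something along these lines (Oracle syntax, shown only for comparison; not
valid in PostgreSQL 8.x):

CREATE GLOBAL TEMPORARY TABLE tmp_ids (id INTEGER)
    ON COMMIT DELETE ROWS;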

I always found it a bit odd, since it always seemed to me like a global
temporary table would be one that every backend could read... something
akin to a real table that doesn't worry about fsync or any of that (and
is potentially not backed on disk at all).
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461