Thread: patch - per-tablespace random_page_cost/seq_page_cost

patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
Well, I was regretting missing the deadline for this CommitFest and
then realized today was only the 14th, so I finished this up while the
kids were napping.

I ended up not reusing the reloptions.c code.  It looks like a lot of
extra complexity for no obvious benefit, considering that there is no
equivalent of AMs for tablespaces and therefore no need to support
AM-specific options.  I did reuse the reloptions syntax, and I think
the internal representation could always be redone later, if we find
that there's a use case for something more complicated.

...Robert

Attachment

Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Greg Stark
Date:
On Sat, Nov 14, 2009 at 7:28 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> I ended up not reusing the reloptions.c code.  It looks like a lot of
> extra complexity for no obvious benefit, considering that there is no
> equivalent of AMs for tablespaces and therefore no need to support
> AM-specific options.  I did reuse the reloptions syntax, and I think
> the internal representation could always be redone later, if we find
> that there's a use case for something more complicated.

a) effective_io_concurrency really deserves to be in the list as well.

b) I thought Tom came down pretty stridently against any data model
which hard codes a specific list of supported options. I can't
remember exactly what level of flexibility he wanted but I think
"doesn't require catalog changes to add a new option" might have been
it.

I agree that having everything smashed to text is a bit kludgy, though
I'm not sure we have the tools to do much better.


--
greg


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Sat, Nov 14, 2009 at 3:06 PM, Greg Stark <gsstark@mit.edu> wrote:
> On Sat, Nov 14, 2009 at 7:28 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> I ended up not reusing the reloptions.c code.  It looks like a lot of
>> extra complexity for no obvious benefit, considering that there is no
>> equivalent of AMs for tablespaces and therefore no need to support
>> AM-specific options.  I did reuse the reloptions syntax, and I think
>> the internal representation could always be redone later, if we find
>> that there's a use case for something more complicated.
>
> a) effective_io_concurrency really deserves to be in the list as well.
>
> b) I thought Tom came down pretty stridently against any data model
> which hard codes a specific list of supported options. I can't
> remember exactly what level of flexibility he wanted but I think
> "doesn't require catalog changes to add a new option" might have been
> it.
>
> I agree that having everything smashed to text is a bit kludgy, though
> I'm not sure we have the tools to do much better.

I'm hoping Tom will reconsider, or at least flesh out his thinking.
What the reloptions code does is create a general framework for
handling options.  Everything gets smashed down to text[], and then
when we actually need to use the reloptions we parse them into a C
struct appropriate to the underlying object type.  This is really the
only feasible design, because pg_class contains multiple different
types of objects - in particular, tables and indices - and indices in
turn come in multiple types, depending on the AM.  So the exact
options that are legal depend on the type of object, and for
indices the AM, and we populate a *different* C struct depending on
the situation.  pg_tablespace, on the other hand, only contains one
type of object: a tablespace.  So, if we stored the options as text[],
we'd parse them out into a C struct just as we do for pg_class, but
unlike the pg_class case, it would always be the *same* C struct.

In other words, we CAN'T use dedicated columns for pg_class because we
can't know in advance precisely what columns will be needed - it
depends on what AMs someone chooses to load up.  For pg_tablespace, we
know exactly what columns will be needed, and the answer is exactly
those options that we choose to support, because tablespaces are not
extensible.
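
To make that concrete: with the text[] approach, extracting a tablespace's
options would always boil down to something like the following.  (This is
just a sketch - the struct and the helper name are illustrative, not
anything that's actually in the patch.)

typedef struct TableSpaceOpts
{
    double      seq_page_cost;      /* -1 means "use the GUC value" */
    double      random_page_cost;   /* -1 means "use the GUC value" */
} TableSpaceOpts;

/*
 * Parse pg_tablespace.spcoptions (a text[] of "name=value" items) into the
 * one and only options struct.  untransformRelOptions() is the existing
 * helper that turns the array into a list of DefElems; the rest is
 * hand-rolled, since there's no AM-style dispatch to worry about.
 * (Error checking on the values is omitted for brevity.)
 */
static void
parse_tablespace_options(Datum spcoptions, TableSpaceOpts *opts)
{
    List       *options = untransformRelOptions(spcoptions);
    ListCell   *cell;

    opts->seq_page_cost = -1;
    opts->random_page_cost = -1;

    foreach(cell, options)
    {
        DefElem    *def = (DefElem *) lfirst(cell);

        if (strcmp(def->defname, "seq_page_cost") == 0)
            opts->seq_page_cost = strtod(defGetString(def), NULL);
        else if (strcmp(def->defname, "random_page_cost") == 0)
            opts->random_page_cost = strtod(defGetString(def), NULL);
        else
            ereport(ERROR,
                    (errcode(ERRCODE_SYNTAX_ERROR),
                     errmsg("unrecognized parameter \"%s\"", def->defname)));
    }
}

The target struct never varies, unlike the pg_class case where it depends
on the object type and the AM.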

That's my thinking, anyway...  YMMV.

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> ....  pg_tablespace, on the other hand, only contains one
> type of object: a tablespace.  So, if we stored the options as text[],
> we'd parse them out into a C struct just as we do for pg_class, but
> unlike the pg_class case, it would always be the *same* C struct.

The same, until it's different.  There is no reason at all to suppose
that the set of options people will want to apply to a tablespace will
remain constant over time --- in fact, I don't think there's even a
solid consensus right now on which GUCs people would want to set at the
tablespace level.  I don't believe it is wise to hardwire this into the
catalog schema.  Yes, it would look marginally nicer from a theoretical
standpoint, but we'd be forever having to revise the schema, plus a lot
of downstream code (pg_dump for example); which is not only significant
work but absolutely prevents making any adjustments except at major
version boundaries.  And I don't see any concrete benefit that we get
out of a hardwired schema for these things.  It's not like we care about
optimizing searches for tablespaces having a particular option setting,
for example.
        regards, tom lane


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Sat, Nov 14, 2009 at 4:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> ....  pg_tablespace, on the other hand, only contains one
>> type of object: a tablespace.  So, if we stored the options as text[],
>> we'd parse them out into a C struct just as we do for pg_class, but
>> unlike the pg_class case, it would always be the *same* C struct.
>
> The same, until it's different.  There is no reason at all to suppose
> that the set of options people will want to apply to a tablespace will
> remain constant over time --- in fact, I don't think there's even a
> solid consensus right now on which GUCs people would want to set at the
> tablespace level.  I don't believe it is wise to hardwire this into the
> catalog schema.  Yes, it would look marginally nicer from a theoretical
> standpoint, but we'd be forever having to revise the schema, plus a lot
> of downstream code (pg_dump for example); which is not only significant
> work but absolutely prevents making any adjustments except at major
> version boundaries.  And I don't see any concrete benefit that we get
> out of a hardwired schema for these things.  It's not like we care about
> optimizing searches for tablespaces having a particular option setting,
> for example.

I can tell I've lost this argument, but I still don't get it.  Why do
we care if we have to change the schema?  It's not a lot of work, and
the number of times we would need to bump catversion for new
pg_tablespace options seems unlikely to be significant in the grand
scheme of things.  I don't think there are very many parameters that
make sense to set per-tablespace.  As for major version boundaries, it
seems almost unimaginable that we would backpatch code to add a new
tablespace option whether the schema permits it or not.  Can you
clarify the nature of your concern here?

What I'm concerned about with text[] is that I *think* it's going to
force us to invent an analog of the relcache for tablespaces.  With
hardwired columns, a regular catcache is all we need.  But the
reloptions stuff is designed to populate a struct, and once we
populate that struct we have to have someplace to hang it - or I guess
maybe we could reparse it on every call to cost_seqscan(),
cost_index(), genericcostestimate(), etc, but that doesn't seem like a
great idea.  So it seems like we'll need another caching layer sitting
over the catcache.  If we already had such a beast it would be
reasonable to add this in, but I would assume that we wouldn't want to
add such a thing without a fairly clear use case that I'm not sure we
have.  Maybe you see it differently?  Or do you have some idea for a
simpler way to set this up?

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> I can tell I've lost this argument, but I still don't get it.  Why do
> we care if we have to change the schema?  It's not a lot of work,

Try doing it a few times.  Don't forget to keep psql and pg_dump
apprised of which PG versions contain which columns.  Not to mention
other tools such as pgAdmin that might like to show these settings.
It gets old pretty fast.

> What I'm concerned about with text[] is that I *think* it's going to
> force us to invent an analog of the relcache for tablespaces.

I'm not really convinced of that, but even if we do, so what?  It's not
that much code to have an extra cache watching the syscache traffic.
There's an example in parse_oper.c of a specialized cache that's about
as complicated as this would be.  It's about 150 lines including copious
comments.  We didn't even bother to split it out into its own source
file.

>  With
> hardwired columns, a regular catcache is all we need.  But the
> reloptions stuff is designed to populate a struct, and once we
> populate that struct we have to have someplace to hang it - or I guess
> maybe we could reparse it on every call to cost_seqscan(),
> cost_index(), genericcostestimate(), etc, but that doesn't seem like a
> great idea.

Well, no, we would not do it that way.  I would imagine instead that
plancat.c would be responsible for attaching appropriate cost values to
each RelOptInfo struct, so it'd be more like one lookup per referenced
table per query.  It's possible that a cache would be useful even at
that load level, but I'm not convinced.
        regards, tom lane


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Sat, Nov 14, 2009 at 6:36 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I'm not really convinced of that, but even if we do, so what?  It's not
> that much code to have an extra cache watching the syscache traffic.
> There's an example in parse_oper.c of a specialized cache that's about
> as complicated as this would be.  It's about 150 lines including copious
> comments.  We didn't even bother to split it out into its own source
> file.

Well, if it's that simple maybe it's not too bad.  I'll take a look at that one.

>>  With
>> hardwired columns, a regular catcache is all we need.  But the
>> reloptions stuff is designed to populate a struct, and once we
>> populate that struct we have to have someplace to hang it - or I guess
>> maybe we could reparse it on every call to cost_seqscan(),
>> cost_index(), genericcostestimate(), etc, but that doesn't seem like a
>> great idea.
>
> Well, no, we would not do it that way.  I would imagine instead that
> plancat.c would be responsible for attaching appropriate cost values to
> each RelOptInfo struct, so it'd be more like one lookup per referenced
> table per query.  It's possible that a cache would be useful even at
> that load level, but I'm not convinced.

I'm not sure exactly what you mean by the last sentence, but my
current design attaches the tablespace OID to RelOptInfo (for baserels
only, of course) and IndexOptInfo, and the costing functions trigger
the actual lookup of the page costs.  I guess that might be slightly
inferior to attaching the looked-up values to the
RelOptInfo, since each possible index-path needs the values for both
the index and the underlying table.
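
Roughly this shape, in other words (just a sketch - the helper name is made
up, and reltablespace is the new field I'm describing):

/*
 * Sketch of the costing-function side: the RelOptInfo carries only the
 * tablespace OID, and the costing code asks a helper for the effective
 * page costs, which fall back to the GUCs when the tablespace has
 * nothing set.
 */
static Cost
seqscan_disk_cost(RelOptInfo *baserel)
{
    double      spc_seq_page_cost;
    double      spc_random_page_cost;

    get_tablespace_page_costs(baserel->reltablespace,
                              &spc_random_page_cost,
                              &spc_seq_page_cost);

    /* sequential read of every page in the relation */
    return spc_seq_page_cost * baserel->pages;
}

cost_index() and genericcostestimate() would do the same thing with the
IndexOptInfo's tablespace OID.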

I will take another crack at it.

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Bernd Helmle
Date:

--On 14. November 2009 20:22:42 -0500 Robert Haas <robertmhaas@gmail.com> 
wrote:

> I will take another crack at it.
>
> ...Robert

I take it that you are going to provide a new patch version?

-- 
Thanks
Bernd


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Mon, Nov 16, 2009 at 4:37 AM, Bernd Helmle <mailings@oopsware.de> wrote:
> --On 14. November 2009 20:22:42 -0500 Robert Haas <robertmhaas@gmail.com>
> wrote:
>
>> I will take another crack at it.
>>
>> ...Robert
>
> I take it that you are going to provide a new patch version?

Yes.  I'm not sure whether or not it will be in time for this CF, however.

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Sat, Nov 14, 2009 at 4:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I don't think there's even a
> solid consensus right now on which GUCs people would want to set at the
> tablespace level.

This seems like an important point that we need to nail down.  The
original motivation for this patch was based on seq_page_cost and
random_page_cost, to cover the case where, for example, one tablespace
is on an SSD and another tablespace is on a RAID array.

Greg Stark proposed adding effective_io_concurrency, and that makes
plenty of sense to me, but I'm sort of disinclined to attempt to
implement that as part of this patch because I have no familiarity
with that part of the code and no hardware that I can use to test
either the current behavior or the modified behavior.  Since I'm
recoding this to use the reloptions mechanism, a patch to add support
for that should be pretty easy to write as a follow-on patch once this
goes in.

Any other suggestions?

Current version of patch is attached.  I've revised it to use the
reloptions stuff, but I don't think it's committable as-is because it
currently thinks that extracting options from a pg_tablespace tuple is
a cheap operation, which was true in the non-reloptions-based
implementation but is less true now.  At least, some benchmarking
needs to be done to figure out whether and to what extent this is an
issue.

...Robert

Attachment

Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Thu, Nov 26, 2009 at 4:25 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> Current version of patch is attached.  I've revised it to use the
> reloptions stuff, but I don't think it's committable as-is because it
> currently thinks that extracting options from a pg_tablespace tuple is
> a cheap operation, which was true in the non-reloptions-based
> implementation but is less true now.  At least, some benchmarking
> needs to be done to figure out whether and to what extent this is an
> issue.

Hmm.  I'm not able to reliably detect a performance difference between
unpatched CVS HEAD (er... git master branch) and same with
spcoptions-v2.patch applied.  I figured that if there were going to be
an impact, it would be most likely to manifest itself in a query that
touches lots and lots of tables but does very little actual work.  So
I used the attached script to create 200 empty tables, 100 in the
default tablespace and 100 in tablespace "dork" (also known as, why I
am working on this at 11 PM on Thanksgiving).  Then I did:

SELECT * FROM a1, a2, a3, ..., a100;

...and likewise for the b tables.  I tried this on an unpatched install and
also with the patch applied, with and without options set on
tablespace dork.  I tried it a couple of times and the times were
pretty consistent on any given run, but bounced around enough between
runs that I can't say with any confidence that this patch makes any
difference one way or the other.

So it seems as if there is little reason to worry about caching, as
Tom suspected, unless someone sees a flaw in my testing methodology.
It might matter more in the future, if we have a larger number of
tablespace options, but we could always add a cache then if need be.

...Robert

Attachment

Re: patch - per-tablespace random_page_cost/seq_page_cost

From
"David Rowley"
Date:
Robert Haas Wrote:
> Hmm.  I'm not able to reliably detect a performance difference between
> unpatched CVS HEAD (er... git master branch) and same with spcoptions-
> v2.patch applied.  I figured that if there were going to be an impact,
> it would be most likely to manifest itself in a query that touches lots
> and lots of tables but does very little actual work.  So I used the
> attached script to create 200 empty tables, 100 in the default
> tablespace and 100 in tablespace "dork" (also known as, why I am
> working on this at 11 PM on Thanksgiving).  Then I did:
> 
> SELECT * FROM a1, a2, a3, ..., a100;

(I've not read the patch, but I've just read the thread)
If you're just benchmarking the planner times to see if the extra lookups
are affecting the planning times, would it not be better to benchmark
EXPLAIN SELECT * FROM a1, a2, a3, ..., a100; ?
Otherwise any small changes might be drowned out in the execution time.
Scanning 100 relations even if they are empty could account for quite a bit
of that time, right?

David



Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Sat, Nov 28, 2009 at 9:54 PM, David Rowley <dgrowley@gmail.com> wrote:
> Robert Haas Wrote:
>> Hmm.  I'm not able to reliably detect a performance difference between
>> unpatched CVS HEAD (er... git master branch) and same with spcoptions-
>> v2.patch applied.  I figured that if there were going to be an impact,
>> it would be most likely to manifest itself in a query that touches lots
>> and lots of tables but does very little actual work.  So I used the
>> attached script to create 200 empty tables, 100 in the default
>> tablespace and 100 in tablespace "dork" (also known as, why I am
>> working on this at 11 PM on Thanksgiving).  Then I did:
>>
>> SELECT * FROM a1, a2, a3, ..., a100;
>
> (I've not read the patch, but I've just read the thread)
> If you're just benchmarking the planner times to see if the extra lookups
> are affecting the planning times, would it not be better to benchmark
> EXPLAIN SELECT * FROM a1, a2, a3, ..., a100; ?
> Otherwise any small changes might be drowned out in the execution time.
> Scanning 100 relations even if they are empty could account for quite a bit
> of that time, right?

Possibly, but even if I can measure a difference doing it that way,
it's not clear that it matters.  It's fairly certain that there will
be a performance degradation if we measure carefully enough, but if
that difference is imperceptible in real-world scenarios, then it's
not worth worrying about.  Still, I probably will test it just to see.

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Thu, Dec 3, 2009 at 11:00 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sat, Nov 28, 2009 at 9:54 PM, David Rowley <dgrowley@gmail.com> wrote:
>> Robert Haas Wrote:
>>> Hmm.  I'm not able to reliably detect a performance difference between
>>> unpatched CVS HEAD (er... git master branch) and same with spcoptions-
>>> v2.patch applied.  I figured that if there were going to be an impact,
>>> it would be most likely to manifest itself in a query that touches lots
>>> and lots of tables but does very little actual work.  So I used the
>>> attached script to create 200 empty tables, 100 in the default
>>> tablespace and 100 in tablespace "dork" (also known as, why I am
>>> working on this at 11 PM on Thanksgiving).  Then I did:
>>>
>>> SELECT * FROM a1, a2, a3, ..., a100;
>>
>> (I've not read the patch, but I've just read the thread)
>> If you're just benchmarking the planner times to see if the extra lookups
>> are affecting the planning times, would it not be better to benchmark
>> EXPLAIN SELECT * FROM a1, a2, a3, ..., a100; ?
>> Otherwise any small changes might be drowned out in the execution time.
>> Scanning 100 relations even if they are empty could account for quite a bit
>> of that time, right?
>
> Possibly, but even if I can measure a difference doing it that way,
> it's not clear that it matters.  It's fairly certain that there will
> be a performance degradation if we measure carefully enough, but if
> that difference is imperceptible in real-world scenarios, then it's
> not worth worrying about.  Still, I probably will test it just to see.

I did some fairly careful benchmarking of EXPLAIN SELECT * FROM a1,
a2, a3, ..., a100.  I explained this query 100 times via DBD::Pg and
used time to measure how long the script took to run.  I ran the
script three times.  And the result is... the unpatched version came
out 1.7% SLOWER than the patched version.  This seems difficult to
take seriously, since it can't possibly be faster to do a syscache
lookup and parse an array than it is to fetch a constant from a known
memory address, but that's what I got.  At any rate, it seems pretty
clear that it's not hurting much.

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Thu, Nov 26, 2009 at 4:25 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Sat, Nov 14, 2009 at 4:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> I don't think there's even a
>> solid consensus right now on which GUCs people would want to set at the
>> tablespace level.
>
> This seems like an important point that we need to nail down.  The
> original motivation for this patch was based on seq_page_cost and
> random_page_cost, to cover the case where, for example, one tablespace
> is on an SSD and another tablespace is on a RAID array.
>
> Greg Stark proposed adding effective_io_concurrency, and that makes
> plenty of sense to me, but I'm sort of disinclined to attempt to
> implement that as part of this patch because I have no familiarity
> with that part of the code and no hardware that I can use to test
> either the current behavior or the modified behavior.  Since I'm
> recoding this to use the reloptions mechanism, a patch to add support
> for that should be pretty easy to write as a follow-on patch once this
> goes in.
>
> Any other suggestions?

Going once...  going twice...  since no one has suggested anything or
spoken against the proposal above, I'm just going to implement
seq_page_cost and random_page_cost for now.

> Current version of patch is attached.  I've revised it to use the
> reloptions stuff, but I don't think it's committable as-is because it
> currently thinks that extracting options from a pg_tablespace tuple is
> a cheap operation, which was true in the non-reloptions-based
> implementation but is less true now.  At least, some benchmarking
> needs to be done to figure out whether and to what extent this is an
> issue.

Per the email that I just sent a few minutes ago, there doesn't appear
to be a performance impact in doing this even in a relatively stupid
way - every call that requires seq_page_cost and/or random_page_cost
results in a syscache lookup and then uses the reloptions machinery to
parse the returned array.

That leaves the question of what the most elegant design is here.  Tom
suggested upthread that we should tag every RelOptInfo - and,
presumably, IndexOptInfo, though it wasn't discussed - with this
information.  However, I don't much like the idea of adding identically
named members in both places.  Should the number of options expand in
the future, this will become silly very quickly.  One option is to
define a struct with seq_page_cost and random_page_cost that is then
included in RelOptInfo and IndexOptInfo.  It would seem to make sense
to make the struct, rather than a pointer to the struct, the member,
because it makes the copyfuncs/equalfuncs stuff easier to handle, and
there's not really any benefit in incurring more palloc overhead.

However, I'm sort of inclined to go ahead and invent a mini-cache for
tablespaces.  It avoids the (apparently insignificant) overhead of
reparsing the array multiple times, but it also avoids bloating
RelOptInfo and IndexOptInfo with more members than really necessary.
It seems like a good idea to add one member to those structures
anyway, for reltablespace, but copying all the values into every one
we create just seems silly.  Admittedly there are only two values
right now, but again we may want to add more someday, and caching at
the tablespace level just seems like the right way to do it.
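
What I have in mind is nothing fancy - roughly the following (sketch only;
it assumes the patch adds a TABLESPACEOID syscache, and all of the names
are provisional):

/* In, say, a new utils/cache file for tablespace lookups. */
typedef struct TableSpaceCacheEntry
{
    Oid         oid;            /* hash key: tablespace OID */
    TableSpaceOpts *opts;       /* parsed spcoptions, or NULL if none set */
} TableSpaceCacheEntry;

static HTAB *TableSpaceCacheHash = NULL;

/*
 * Flush the whole cache whenever pg_tablespace changes; entries get
 * rebuilt lazily on the next lookup.  (Simplified - a real version
 * would also want to free the parsed options.)
 */
static void
InvalidateTableSpaceCacheCallback(Datum arg, int cacheid, ItemPointer tuplePtr)
{
    hash_destroy(TableSpaceCacheHash);
    TableSpaceCacheHash = NULL;
}

static void
InitializeTableSpaceCache(void)
{
    HASHCTL     ctl;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(TableSpaceCacheEntry);
    ctl.hash = oid_hash;
    TableSpaceCacheHash = hash_create("TableSpace cache", 16, &ctl,
                                      HASH_ELEM | HASH_FUNCTION);

    CacheRegisterSyscacheCallback(TABLESPACEOID,
                                  InvalidateTableSpaceCacheCallback,
                                  (Datum) 0);
}

Lookups would SearchSysCache() the pg_tablespace row only on a cache miss,
parse spcoptions once, and stash the result in the hash table, so the
parsing cost is paid at most once per tablespace per invalidation.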

Thoughts?

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Thu, Dec 17, 2009 at 9:15 PM, Robert Haas <robertmhaas@gmail.com> wrote:
> Going once...  going twice...  since no one has suggested anything or
> spoken against the proposal above, I'm just going to implement
> seq_page_cost and random_page_cost for now.
[...]
> Per the email that I just sent a few minutes ago, there doesn't appear
> to be a performance impact in doing this even in a relatively stupid
> way - every call that requires seq_page_cost and/or random_page_cost
> results in a syscache lookup and then uses the reloptions machinery to
> parse the returned array.
>
> That leaves the question of what the most elegant design is here.  Tom
> suggested upthread that we should tag every RelOptInfo - and,
> presumably, IndexOptInfo, though it wasn't discussed - with this
> information.  However, I don't much like the idea of adding identically
> named members in both places.  Should the number of options expand in
> the future, this will become silly very quickly.  One option is to
> define a struct with seq_page_cost and random_page_cost that is then
> included in RelOptInfo and IndexOptInfo.  It would seem to make sense
> to make the struct, rather than a pointer to the struct, the member,
> because it makes the copyfuncs/equalfuncs stuff easier to handle, and
> there's not really any benefit in incurring more palloc overhead.
>
> However, I'm sort of inclined to go ahead and invent a mini-cache for
> tablespaces.  It avoids the (apparently insignificant) overhead of
> reparsing the array multiple times, but it also avoids bloating
> RelOptInfo and IndexOptInfo with more members than really necessary.
> It seems like a good idea to add one member to those structures
> anyway, for reltablespace, but copying all the values into every one
> we create just seems silly.  Admittedly there are only two values
> right now, but again we may want to add more someday, and caching at
> the tablespace level just seems like the right way to do it.
>
> Thoughts?

Hearing no thoughts, I have implemented as per the above.  PFA the
latest version.  Any reviews, comments, feedback, etc. much
appreciated.

Thanks,

...Robert

Attachment

Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Jaime Casanova
Date:
On Mon, Dec 28, 2009 at 2:52 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> Hearing no thoughts, I have implemented as per the above.  PFA the
> latest version.  Any reviews, comments, feedback, etc. much
> appreciated.
>

btw, you need to change
   STATRELATT,

to
   STATRELATTINH,

in syscache.c

--
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Sun, Jan 3, 2010 at 6:56 PM, Jaime Casanova
<jcasanov@systemguards.com.ec> wrote:
> On Mon, Dec 28, 2009 at 2:52 AM, Robert Haas <robertmhaas@gmail.com> wrote:
>>
>> Hearing no thoughts, I have implemented as per the above.  PFA the
>> latest version.  Any reviews, comments, feedback, etc. much
>> appreciated.
>>
>
> btw, you need to change
>
>    STATRELATT,
>
> to
>
>    STATRELATTINH,
>
> in syscache.c

Hmm, I see this needs to be rebased over Tom's latest changes, but the
conflict I got was in syscache.h, rather than syscache.c.  Not sure if
that's what you were going for or if there's another issue.  Updated
patch attached.

...Robert

Attachment

Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Alvaro Herrera
Date:
> --- 49,63 ----
>    * ----------------
>    */
>   
> ! #define Natts_pg_tablespace                6

Should be 5?




-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Jaime Casanova
Date:
On Sun, Jan 3, 2010 at 10:39 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>>
>> in syscache.c
>
> Hmm, I see this needs to be rebased over Tom's latest changes, but the
> conflict I got was in syscache.h, rather than syscache.c.  Not sure if
> that's what you were going for or if there's another issue.  Updated
> patch attached.
>

ah! yeah! it has been a long holiday ;)


--
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Alvaro Herrera
Date:
Robert Haas escribió:

> Hmm, I see this needs to be rebased over Tom's latest changes, but the
> conflict I got was in syscache.h, rather than syscache.c.  Not sure if
> that's what you were going for or if there's another issue.  Updated
> patch attached.

FWIW I think the reloptions code in this patch is sane enough.  The fact
that it was this easily written means that the API for reloptions was
reasonably chosen, thanks :-)


Hmm, it seems we're missing a "need_initialization = false" at the
bottom of initialize_reloptions ...   I'm wondering what happened to
that??

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> Hmm, I see this needs to be rebased over Tom's latest changes, but the
> conflict I got was in syscache.h, rather than syscache.c.  Not sure if
> that's what you were going for or if there's another issue.  Updated
> patch attached.

I'm planning to go look at Naylor's bki refactoring patch now.  Assuming
there isn't any showstopper problem with that, do you object to it
getting committed first?  Either order is going to create a merge
problem, but it seems like we'd be best off to get Naylor's patch in
so people can resync affected patches before the January commitfest
starts.
        regards, tom lane


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Mon, Jan 4, 2010 at 1:39 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> Hmm, I see this needs to be rebased over Tom's latest changes, but the
>> conflict I got was in syscache.h, rather than syscache.c.  Not sure if
>> that's what you were going for or if there's another issue.  Updated
>> patch attached.
>
> I'm planning to go look at Naylor's bki refactoring patch now.  Assuming
> there isn't any showstopper problem with that, do you object to it
> getting committed first?  Either order is going to create a merge
> problem, but it seems like we'd be best off to get Naylor's patch in
> so people can resync affected patches before the January commitfest
> starts.

My only objection to that is that if we're going to add attoptions
also, I'd like to get this committed first before I start working on
that, and we're running short on time.  If you can commit his patch in
the next day or two, then I am fine with rebasing mine afterwards, but
if it needs more work than that then I would prefer to commit mine so
I can move on.  Is that reasonable?

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> My only objection to that is that if we're going to add attoptions
> also, I'd like to get this committed first before I start working on
> that, and we're running short on time.  If you can commit his patch in
> the next day or two, then I am fine with rebasing mine afterwards, but
> if it needs more work than that then I would prefer to commit mine so
> I can move on.  Is that reasonable?

Fair enough --- if I can't get it done today I will let you know and
hold off.
        regards, tom lane


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Mon, Jan 4, 2010 at 10:42 AM, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
> Robert Haas escribió:
>
>> Hmm, I see this needs to be rebased over Tom's latest changes, but the
>> conflict I got was in syscache.h, rather than syscache.c.  Not sure if
>> that's what you were going for or if there's another issue.  Updated
>> patch attached.
>
> FWIW I think the reloptions code in this patch is sane enough.  The fact
> that it was this easily written means that the API for reloptions was
> reasonably chosen, thanks :-)

:-)

Actually, there are some things about it that I'm not entirely happy
with, but I haven't brought them up because I don't have a clear idea
what I think we should do about them.  The special-case hack to handle
the "oids" option is one of them.... another, possibly related, is
that I wish we could decouple the options-validation logic from the
backend storage representation.  But those are issues for a future
thread.  I do think it's pretty well-done overall.

> Hmm, it seems we're missing a "need_initialization = false" at the
> bottom of initialize_reloptions ...   I'm wondering what happened to
> that??

It appears that it has never been there.

$ git log -Sneed_initialization master src/backend/access/common/reloptions.c
commit f35e4442a6c9893e72fe870d9e1756262d542027
Author: Alvaro Herrera <alvherre@alvh.no-ip.org>
Date:   Mon Jan 5 17:14:28 2009 +0000
    Change the reloptions machinery to use a table-based parser, and provide
    a more complete framework for writing custom option processing routines
    by user-defined access methods.

    Catalog version bumped due to the general API changes, which are going to
    affect user-defined "amoptions" routines.

That was the original patch that added need_initialization, and it
didn't add that line.
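
Presumably the intent was for the tail of that function to look something
like this (just guessing at what was meant; the assignment isn't there
today):

static void
initialize_reloptions(void)
{
    /* ... build the relOpts[] array from the built-in option tables ... */

    need_initialization = false;    /* don't redo all of the above next time */
}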

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Sun, Jan 3, 2010 at 11:19 PM, Alvaro Herrera
<alvherre@commandprompt.com> wrote:
>
>> --- 49,63 ----
>>    * ----------------
>>    */
>>
>> ! #define Natts_pg_tablespace                         6
>
> Should be 5?

Yep.  I also fixed the other two bits of brain fade that you pointed
out to me via private email.  Updated patch attached.

...Robert

Attachment

Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Mon, Jan 4, 2010 at 1:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> My only objection to that is that if we're going to add attoptions
>> also, I'd like to get this committed first before I start working on
>> that, and we're running short on time.  If you can commit his patch in
>> the next day or two, then I am fine with rebasing mine afterwards, but
>> if it needs more work than that then I would prefer to commit mine so
>> I can move on.  Is that reasonable?
>
> Fair enough --- if I can't get it done today I will let you know and
> hold off.

OK.  I just took a really fast look at the bki patch and it looks
pretty nice, so I hope you're able to get it in. Of course, I'm biased
because it's based on earlier work of my own, but biased != wrong.
:-)

A lot more work will need to be done to escape the insanity that is
our current method of handling system catalogs, but this seems like a
good step in the right direction.

I also observe that it applies cleanly over my current spcoptions
branch, so the merge conflicts might be a non-issue.

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
John Naylor
Date:
Tom,

It seems I introduced a couple errors in src/tools/msvc/clean.bat in
the bki patch. I'm attaching a cumulative fix. I can resend the
complete updated patch, if you like...

Sorry! :-)
John

> I'm planning to go look at Naylor's bki refactoring patch now.  Assuming
> there isn't any showstopper problem with that, do you object to it
> getting committed first?  Either order is going to create a merge
> problem, but it seems like we'd be best off to get Naylor's patch in
> so people can resync affected patches before the January commitfest
> starts.

Attachment

Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Mon, Jan 4, 2010 at 1:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> My only objection to that is that if we're going to add attoptions
>> also, I'd like to get this committed first before I start working on
>> that, and we're running short on time.  If you can commit his patch in
>> the next day or two, then I am fine with rebasing mine afterwards, but
>> if it needs more work than that then I would prefer to commit mine so
>> I can move on.  Is that reasonable?
>
> Fair enough --- if I can't get it done today I will let you know and
> hold off.

OK, so since you got this done, I'm going to go ahead and rebase &
commit mine today, after a final read-through or two, unless you or
anyone else wants to insert some last-minute objections?

...Robert


Re: patch - per-tablespace random_page_cost/seq_page_cost

From
Robert Haas
Date:
On Tue, Jan 5, 2010 at 10:17 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Jan 4, 2010 at 1:48 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Robert Haas <robertmhaas@gmail.com> writes:
>>> My only objection to that is that if we're going to add attoptions
>>> also, I'd like to get this committed first before I start working on
>>> that, and we're running short on time.  If you can commit his patch in
>>> the next day or two, then I am fine with rebasing mine afterwards, but
>>> if it needs more work than that then I would prefer to commit mine so
>>> I can move on.  Is that reasonable?
>>
>> Fair enough --- if I can't get it done today I will let you know and
>> hold off.
>
> OK, so since you got this done, I'm going to go ahead and rebase &
> commit mine today, after a final read-through or two, unless you or
> anyone else wants to insert some last-minute objections?

I committed this, but then in looking some things over further today,
I realized that I seem to have done something stupid - namely, not
adding a varlena header to TableSpaceOpts.  I believe that the
attached patch is needed to fix the problem.

(I am not quite sure why we are using bytea here since AFAICS we don't
actually store parsed reloptions structures in any kind of persistent
storage, but clearly overwriting the first four bytes of
random_page_cost with a varlena header is no good.)
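
In other words, TableSpaceOpts needs to start with the length word that the
reloptions code fills in via SET_VARSIZE(), something like this (modulo the
exact field names):

typedef struct TableSpaceOpts
{
    int32       vl_len_;            /* varlena header (do not touch directly!) */
    double      random_page_cost;   /* -1 means "not set, fall back to the GUC" */
    double      seq_page_cost;      /* likewise */
} TableSpaceOpts;

...with the offsetof() entries in the option-parsing table pointed at this
layout, so fillRelOptions() writes the values after the header rather than
on top of random_page_cost.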

...Robert

Attachment