On Tue, Jan 3, 2012 at 11:17 AM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On Tue, Jan 3, 2012 at 3:24 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> I feel like the first thing we should be doing here is some
>> benchmarking. If we change just the scans in dependency.c and then
>> try the test case Tom suggested (dropping a schema containing a large
>> number of functions), we can compare the patched code with master and
>> get an idea of whether the performance is acceptable.
>
> Yes, I've done this and it takes 2.5s to drop 10,000 functions using
> an MVCC snapshot.
>
> That was acceptable to *me*, so I didn't try measuring using just SnapshotNow.
>
> We can do a lot of tests, but in the end it's a human judgement. Are
> 100% correct results from catalog accesses worth having even if the
> raw speed is substantially worse? (Whether it's 1,000,000 times
> slower or not is not relevant if it is still fast enough.)

Sure. But I don't see why that means it wouldn't be nice to know
whether or not it is in fact a million times slower. If Tom's
artificial worst case is a million times slower, then there are
probably other cases we care about more that will be measurably
impacted, and we're going to want to think about what to do about
that. We can wait until you've finished the patch before we do that
testing, or we can do it now and get some idea of whether the
approach is likely to be viable, or whether it will require some
adjustment before we actually trawl through all that code.

On my laptop, dropping a schema with 10,000 functions using commit
d5448c7d31b5af66a809e6580bae9bd31448bfa7 takes 400-500 ms. If my
laptop is the same speed as your laptop, that would mean a 5-6x
slowdown, but of course that's comparing apples and oranges... in any
event, if the real number is anywhere in that ballpark, it's probably
a surmountable obstacle, but I'd like to know rather than guessing.
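
For reference, here's a minimal sketch of the kind of test case under
discussion, runnable in psql (the schema and function names are
illustrative, not taken from the actual test):

    -- set up a schema containing 10,000 trivial functions
    CREATE SCHEMA drop_test;
    DO $$
    BEGIN
        FOR i IN 1..10000 LOOP
            EXECUTE format(
                'CREATE FUNCTION drop_test.f%s() RETURNS int
                 LANGUAGE sql AS $f$ SELECT 1 $f$', i);
        END LOOP;
    END
    $$;

    -- time the cascaded drop
    \timing
    DROP SCHEMA drop_test CASCADE;
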
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company