Thread: Great Bridge benchmark results for Postgres, 4 others

Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Greetings all,

At long last, here are the results of the benchmarking tests that
Great Bridge conducted in its initial exploration of PostgreSQL.  We
held it up so we could test the shipping release of the new
Interbase 6.0.  This is a news release that went out today.

The release is also on our website at
http://www.greatbridge.com/news/p_081420001.html.  Graphics of the
AS3AP and TPC-C test results are at
http://www.greatbridge.com/img/as3ap.gif and
http://www.greatbridge.com/img/tpc-c.gif respectively.

I'll try and field any questions anyone has, or refer you to someone
who can.

Best regards,

Ned Lilly
VP Hacker Relations
Great Bridge, LLC

--

Open source database routs competition in new benchmark tests

PostgreSQL meets or exceeds speed and scalability of proprietary
database leaders, and significantly surpasses open source
competitors


NORFOLK, Va., August 14, 2000 - PostgreSQL, the world's most advanced
open source database, routed the competition in recent benchmark
testing, topping the proprietary database leaders in
industry-standard transaction-processing tests. PostgreSQL, also
known as "Postgres," is an object-relational database management
system (DBMS) that newly formed Great Bridge LLC will professionally
market, service and support. Postgres also consistently outperformed
open source competitors, including MySQL and Interbase, in the
benchmark tests. Great Bridge will market Postgres-based open source
solutions as a highly reliable and lower cost option for businesses
seeking an alternative to proprietary databases.

On the ANSI SQL Standard Scalable And Portable (AS3AP) benchmark, a
rudimentary information retrieval test that measures raw speed and
scalability, Postgres performed an average of four to five times
faster than every other database tested, including two major
proprietary DBMS packages, the MySQL open source database, and
Interbase, a formerly proprietary product which was recently made
open source by Inprise/Borland. (See Exhibit 1)

In the Transaction Processing Council's TPC-C test, which simulates
a real-world online transaction processing (OLTP) environment,
Postgres consistently matched the performance of the two leading
proprietary database applications. (See Exhibit 2)  The two industry
leaders cannot be mentioned by name because their restrictive
licensing agreements prohibit anyone who buys their closed-source
products from publishing their company names in benchmark testing
results without the companies' prior approval.

"The test results show that Postgres is a robust, well-built product
that must be considered in the same category as enterprise-level
competition," said Robert Gilbert, Great Bridge President and CEO.
"Look at the trendlines in the AS3AP test:  Postgres, like the
proprietary leaders, kept a relatively consistent output level all
the way up to 100 concurrent users - and that output was four to
five times faster than the proprietary products.  Interbase and
MySQL fell apart under heavy usage.  That's a strong affirmation
that Postgres today is a viable alternative to the market-leading
proprietary databases in terms of performance and scalability - and
the clear leader among open source databases."

The tests were conducted by Xperts Inc. of Richmond, Virginia, an
independent technology solutions company, using Quest Software's
Benchmark Factory application.  Both the AS3AP and the TPC-C
benchmarks simulated transactions by one to 100 simultaneous users
in a client-server environment. One hundred concurrent users
approximates the middle range of a traditional database user pool;
many applications never see more than a few users on the system at
any given time, while other more sophisticated enterprise platforms
number concurrent users in the thousands.  In a Web-based
application, where the connection to the database is measured in
milliseconds, 100 simultaneous users would represent a substantial
load - the equivalent of 100 customers hitting the "submit" button on
an order form at exactly the same time.

The AS3AP test measures raw database data retrieval power, showing
an application's scalability, portability and ease of use and
interpretation through the use of simple ANSI standard SQL queries.
The TPC-C test simulates a warehouse distribution system, including
order creation, customer payments, order status checking, delivery,
and inventory management.
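The five TPC-C transaction types listed above can be sketched as operations on a toy warehouse schema. This is purely illustrative: Python's built-in sqlite3 stands in for the database, and the schema is a drastic simplification of the real TPC-C kit, not the benchmark itself.

```python
import sqlite3

# Toy stand-in for the TPC-C warehouse schema (illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stock  (item_id INTEGER PRIMARY KEY, qty INTEGER);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY AUTOINCREMENT,
                         item_id INTEGER, status TEXT);
""")
conn.execute("INSERT INTO stock VALUES (1, 100)")

def new_order(item_id):            # 1. order creation
    cur = conn.execute("INSERT INTO orders (item_id, status) VALUES (?, 'new')",
                       (item_id,))
    conn.execute("UPDATE stock SET qty = qty - 1 WHERE item_id = ?", (item_id,))
    return cur.lastrowid

def payment(order_id):             # 2. customer payment
    conn.execute("UPDATE orders SET status = 'paid' WHERE order_id = ?",
                 (order_id,))

def order_status(order_id):        # 3. order status check
    return conn.execute("SELECT status FROM orders WHERE order_id = ?",
                        (order_id,)).fetchone()[0]

def delivery(order_id):            # 4. delivery
    conn.execute("UPDATE orders SET status = 'delivered' WHERE order_id = ?",
                 (order_id,))

def stock_level(item_id):          # 5. inventory management
    return conn.execute("SELECT qty FROM stock WHERE item_id = ?",
                        (item_id,)).fetchone()[0]

oid = new_order(1)
payment(oid)
delivery(oid)
```

The benchmark runs all five transaction types concurrently against a much larger schema; the point here is only the shape of the workload.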

"What stood out for us was the consistent performance of Postgres,
which stayed the same or tested better than those of the leading
proprietary applications. Postgres performed consistently whether it
was being used by one or 100 people," said Richard Brosnahan, senior
software developer at Xperts.

Postgres is a standards-based object-relational SQL database
designed for e-business and enterprise applications. The software is
open source and freely owned, continuously augmented by a global
collaborative community of elite programmers who volunteer their
time and expertise to improve the product. In the last two years,
with the introduction of versions 6.5 and 7.0 of the software,
Postgres has seen rapid enhancement through a series of high-level
refinements.

"Postgres' performance is a powerful affirmation of the open source
method of development," said Gilbert of Great Bridge. "Hundreds,
even thousands, of open source developers work on this software,
demonstrating a rate of innovation and improvement that the
proprietary competition simply can't match.  And it's only going to
get better."

A closer look

Xperts ran the benchmark tests on Compaq Proliant ML350 servers with
512 MB of RAM and two 18.2 GB hard disks, equipped with Intel
Pentium III processors and Red Hat Linux 6.1 and Windows NT
operating systems.  The company ensured the tests' consistency by
using the same computers for each test, with each product connecting
to the tests through its own preferred ODBC driver.  While Benchmark
Factory does provide native drivers for some commercial databases,
using each product's own ODBC ensured the most valid "apples to
apples" comparison.

In the AS3AP tests, PostgreSQL 7.0 significantly outperformed both
the leading commercial and open source applications in speed and
scalability.  In the tested configuration, Postgres peaked at 1127.8
transactions per second with five users, and still processed at a
steady rate of 1070.4 with 100 users. The proprietary leader also
performed consistently, with a high of 314.15 transactions per
second with eight users, which fell slightly to 288.37 transactions
per second with 100 users. The other leading proprietary database
also demonstrated consistency, running at 200.21 transactions per
second with six users and 197.4 with 100.

The other databases tested against the AS3AP benchmarks, open source
competitors MySQL 3.22 and Interbase 6.0, demonstrated some speed
with a low number of users but a distinct lack of scalability. MySQL
reached a peak of 803.48 transactions per second with two users, but its
performance fell precipitously under the stress of additional users, to
117.87 transactions per second with 100 users. Similarly, Interbase
reached 424 transactions per second with four users, but its
performance declined steadily with additional users, dropping off to
146.86 transactions per second with 100 users.
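One way to see the scalability gap in these figures is the share of peak throughput each product retained at 100 users (numbers taken from the two paragraphs above; the proprietary products are labeled generically since the release does not name them):

```python
# (peak tps, tps at 100 users) as reported in the release
results = {
    "PostgreSQL 7.0": (1127.8, 1070.4),
    "Proprietary A":  (314.15, 288.37),
    "Proprietary B":  (200.21, 197.4),
    "MySQL 3.22":     (803.48, 117.87),
    "Interbase 6.0":  (424.0,  146.86),
}

for name, (peak, at_100) in results.items():
    retained = 100 * at_100 / peak
    print(f"{name:15s} retained {retained:5.1f}% of peak at 100 users")
```

Postgres and the two proprietary products retain over 90% of their peak; MySQL retains under 15% and Interbase about a third.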

"It's just astounding, and unexpected," said Xperts' Brosnahan of
Postgres' performance. "I ran the test twice to make sure it was
running right. Postgres is just a really powerful database."

In the TPC-C tests, Postgres performed neck and neck with the two
leading proprietary databases.  The test simultaneously runs five
different types of simulated transactions; the attached graph of
test results (Exhibit 2) shows steadily ascending intertwined lines
representing all three databases, suggesting the applications scaled
at comparable rates. With all five transactions running with 100
users, the three databases performed at a rate of slightly above
five transactions per second.

"The TPC-C is a challenging test with five transactions running at
once while querying against the database and the stress of a growing
number of users. It showed that all the databases we tested handle
higher loads very well, the way they should," Brosnahan explained.

Neither Interbase nor MySQL could be tested for TPC-C benchmarks.
MySQL could not run the test because the application is not
adequately compliant with minimal ANSI SQL standards set in 1992.
Interbase 6.0, recently released as open source, does not have a
stable ODBC driver yet; while Xperts was able to adapt the version 5
ODBC driver for the AS3AP tests, the TPC-C test would not run.
"With MySQL it's an inherent design issue. Interbase 6 should run
the TPC-C test, and perhaps would with tweaking of the test's code,"
said Brosnahan.
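The release does not enumerate the SQL-92 features at issue, but MySQL 3.22 notably lacked subqueries and multi-statement transactions, both of which a TPC-C-style workload leans on. Both are shown here using sqlite3 (which supports them) purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 50)])
conn.commit()

# Subquery: SQL-92 constructs like this were unavailable in MySQL 3.22.
rich = conn.execute("""
    SELECT id FROM accounts
    WHERE balance > (SELECT AVG(balance) FROM accounts)
""").fetchall()

# Transactional rollback: an OLTP benchmark needs multi-statement
# transactions to be atomic, so a mid-transaction failure undoes the work.
try:
    conn.execute("UPDATE accounts SET balance = balance - 500 WHERE id = 1")
    raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    conn.rollback()   # the update is undone; the balance is restored

balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
```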

Great Bridge's Gilbert attributes Postgres' high performance to a
quality differential that comes from the open source development
process; the source code for Postgres has been subjected to years of
rigorous peer review by some of the best programmers in the world,
many of whom use the product in their work environments.  "Great
Bridge believes that Postgres is by far the most robust open source
database available.  These tests provide strong affirmation of that
belief," he said.  The company intends to work with hardware vendors
and other interested parties to continue larger-scale testing of
Postgres and other leading open source technologies.

About Great Bridge

Great Bridge LLC provides open source solutions powered by
PostgreSQL, the world's most advanced open source database.  Great
Bridge delivers value-added open source software and support
services based on PostgreSQL, empowering e-business builders with an
enterprise-class database and tools at a fraction of the cost of
closed, proprietary alternatives.

Headquartered in Norfolk, Virginia, Great Bridge is a privately held
company funded by Landmark Communications, Inc., the media company
that also owns The Weather Channel, weather.com, and national and
international interests in newspapers, broadcasting, electronic
publishing, and interactive media.

# # #



Re: Great Bridge benchmark results for Postgres, 4 others

From
"Bryan White"
Date:
> Greetings all,
>
> At long last, here are the results of the benchmarking tests that
> Great Bridge conducted in its initial exploration of PostgreSQL.  We
> held it up so we could test the shipping release of the new
> Interbase 6.0.  This is a news release that went out today.
>
> The release is also on our website at
> http://www.greatbridge.com/news/p_081420001.html.  Graphics of the
> AS3AP and TPC-C test results are at
> http://www.greatbridge.com/img/as3ap.gif and
> http://www.greatbridge.com/img/tpc-c.gif respectively.

This looks great.  Better than I would have expected.  However, I have some
concerns.

1) Using only ODBC drivers.  I don't know how much of an impact a driver can
make, but it would seem that using native drivers would shut down one source
of objections.

2) Postgres has the 'vacuum' process, typically run nightly, which if
not accounted for in the benchmark would give Postgres an artificial edge.
I don't know how you would account for it, but in fairness I think it should
be acknowledged.  Do the other big databases have similar maintenance
issues?

3) The test system has 512MB RAM.  Given the licensing structure and high
licensing fees, users have an incentive to use much larger amounts of RAM.
Someone who can only afford 512MB probably can't afford the big names
anyway.

4) The article does not mention the speed or number of CPUs, or anything
about the disks other than size.  I can halfway infer that they are SCSI, but
how are they laid out?

I am not trying to tear the benchmark down, just wanting to make it more
immune to such attempts.
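For readers unfamiliar with the vacuum process in point 2: UPDATEs and DELETEs leave dead rows behind, and a periodic VACUUM reclaims that space. A minimal sketch of the maintenance step (sqlite3's VACUUM stands in here only so the example is self-contained; in Postgres the command is VACUUM or VACUUM ANALYZE):

```python
import sqlite3

# UPDATEs and DELETEs leave dead rows behind; a nightly VACUUM reclaims them.
conn = sqlite3.connect(":memory:")   # file-backed in real use
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO log (payload) VALUES (?)",
                 [("x" * 100,) for _ in range(1000)])
conn.execute("DELETE FROM log WHERE id % 2 = 0")   # creates unclaimed space
conn.commit()

conn.execute("VACUUM")   # the nightly maintenance step Bryan refers to
remaining = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
```

A benchmark that rebuilds the database from scratch before each run, as this one did, never accumulates enough dead rows for the missing VACUUM to matter much.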



Re: Great Bridge benchmark results for Postgres, 4 others

From
"Steve Wolfe"
Date:
> 1) Using only ODBC drivers.  I don't know how much of an impact a driver
can
> make but it would seem that using native drivers would shutdown one source
> of objections.

  Using ODBC is guaranteed to slow down the benchmark.  I've seen native
database drivers beat ODBC by anywhere from a factor of two to an order of
magnitude.

steve


Re: Great Bridge benchmark results for Postgres, 4 others

From
The Hermit Hacker
Date:
On Mon, 14 Aug 2000, Steve Wolfe wrote:

> > 1) Using only ODBC drivers.  I don't know how much of an impact a driver
> can
> > make but it would seem that using native drivers would shutdown one source
> > of objections.
>
>   Using ODBC is guaranteed to slow down the benchmark.  I've seen native
> database drivers beat ODBC by anywhere from a factor of two to an order of
> magnitude.

I haven't had a chance to take a look at the benchmarks yet, having just
seen this, but *if* Great Bridge performed their benchmarks such that all
the databases were accessed via ODBC, then they are using an
'apples-to-apples' approach, as each will have similar slowdowns as a
result ...



Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Marc's right... we opted for ODBC to ensure as much of an "apples to apples"
comparison as possible.  Of the 5 databases we tested, a native driver existed for
only the two (ahem) unnamed proprietary products - Postgres, Interbase, and MySQL
had to rely on ODBC.  So we used the vendor's own ODBC for each of the other two
cases.

<disclaimer>
As with all benchmarks, your mileage will vary according to hardware, OS, and of
course the specific application.  What we attempted to do here was use two
industry-standard benchmarks and treat all five products the same.
</disclaimer>

Presumably, if the vendor had taken the time to write a native driver for
Postgres, the results would have seen an even bigger kick.  We don't have any
reason to think that the results for all five tests in native driver mode would be
out of proportion to the results we got through ODBC.

Regards,
Ned




Re: Great Bridge benchmark results for Postgres, 4 others

From
The Hermit Hacker
Date:
On Mon, 14 Aug 2000, Ned Lilly wrote:

> Bryan, see my earlier post re: ODBC... will try and answer your other questions
> here...
>
> > 2) Postgres has the 'vacuum' process which is typically run nightly which if
> > not accounted for in the benchmark would give Postgres an artificial edge.
> > I don't know how you would account for it but in fairness I think it should
> > be acknowledged.  Do the other big databases have similar maintenance
> > issues?
>
> Don't know how this would affect the results directly.  The benchmark
> app builds the database clean each time, and takes about 18 hours to
> run for the full 100 users (for each product).  So each database
> created was coming in with a clean slate, with no issues of unclaimed
> space or what have you...

do the tests only perform SELECTs?  Any UPDATEs or DELETEs will create
unclaimed space ...

> True, and it's a fair question how each database would make use of
> more RAM.  My guess, however, is that it wouldn't boost the
> transactions per second number - where more RAM would impact the
> numbers would be more sustained performance in higher numbers of
> concurrent users.  Postgres and the two proprietary databases all kept
> fairly flat lines (good) as the number of users edged up.  We plan to
> continuously re-run the tests with more users and bigger iron, so as
> we do that, we'll keep the community informed.

Actually, more RAM would permit you to increase both the -B parameter as
well as the -S one ... which are both noted for providing performance
increases ... -B more on repetitive queries and -S on anything involving
ORDER BY or GROUP BY ...

Again, without knowing the specifics of the queries, whether either of the
above would make a difference is unknown ...


Re: Great Bridge benchmark results for Postgres, 4 others

From
Andrew Snow
Date:

On Mon, 14 Aug 2000, Ned Lilly wrote:

> Bryan, see my earlier post re: ODBC... will try and answer your other questions
> here...
>
> > 2) Postgres has the 'vacuum' process which is typically run nightly which if
> > not accounted for in the benchmark would give Postgres an artificial edge.
> > I don't know how you would account for it but in fairness I think it should
> > be acknowledged.  Do the other big databases have similar maintenance
> > issues?
>
> Don't know how this would affect the results directly.  The benchmark app builds
> the database clean each time, and takes about 18 hours to run for the full 100
> users (for each product).  So each database created was coming in with a clean
> slate, with no issues of unclaimed space or what have you...

Does a vacuum analyze not get run at all? Could this affect performance, or
is that not relevant in these benchmarks?



Regards,
Andrew



TPC (was Great Bridge benchmark results for Postgres, 4 others)

From
Alex Pilosov
Date:
A more interesting benchmark would be to compare TPC-C results on the same
kind of hardware other vendors use for THEIR TPC benchmarks, which are
posted on tpc.org, as well as comparing the price/performance of each.

A TPC benchmark run by a company commissioned by GB cannot be validated and
accepted into the TPC database; results must be produced under close
supervision by TPC-approved monitors. I hope GB actually springs for the
price of running the REAL TPC benchmark (last I heard it was around $25k).

To see how Postgres performs on the low end (for TPC, low-end is <8
processors) would be interesting, to say the least.

A problem with a real TPC run is the strong suggestion to use a transaction
manager to improve speed; no transaction manager supports Postgres yet.

Another note on TPC: they require the final price to include a support
contract, on which Great Bridge should be able to compete.



Re: TPC (was Great Bridge benchmark results for Postgres, 4 others)

From
Ned Lilly
Date:
Hi Alex,

Absolutely, as I said, we did this benchmarking for our own internal due diligence in
understanding PostgreSQL's capabilities.  It's not intended to be a formal big-iron TPC
test, like you see at tpc.org.  The software we used was one commercial vendor's
implementation of the published AS3AP and TPC-C specs - it's the same one used by a lot
of trade magazines.

Benchmarking will be a significant part of Great Bridge's ongoing contribution to the
PostgreSQL community - starting with these relatively simple tests, and scaling up to
larger systems over time.

Regards,
Ned



RE: TPC (was Great Bridge benchmark results for Postgres, 4 others)

From
"Dan Browning"
Date:
This benchmark had a lot of value for the job that was going to use ODBC.
It's pretty obvious that Postgres blows away everyone else in the ODBC dept.  I'm
not sure if this shows that PGsql is the best performer, or if it just shows
that the other DBs have sucky ODBC implementations.

Too bad it doesn't show us what the performance would have been with native
drivers.  Hopefully someone will develop an interface that each DB supports
at full speed on a minimum-functionality level (like ODBC, but faster).
Maybe that would shed some more light.  Perl::DBI comes close to this, but
you're still relying on the quality of the module's implementation.  Oh well.

But I should mention that I will be using PGsql for my current .com project
(online ordering, etc.).  This choice is over Sybase, Interbase, and MySQL.
I found the price / performance / features just couldn't be beat with PGsql.





Re: Great Bridge benchmark results for Postgres, 4 others

From
Jeff Hoffmann
Date:
I haven't played with Interbase yet, but my understanding is they have
two types of server -- the "classic" (process per connection?) and a
"superserver" (multithreaded).  I'm guessing the multithreaded one is faster
(why bother with the added complexity if it isn't?), so which version
did you run this test against?

The other question I have is whether it was possible that the disks were a
bottleneck in the test process.  It seems strange that three databases
would perform nearly identically for so long if there wasn't a
bottleneck somewhere.  Were the drives striped?  Did you consider
performing the test with faster RAID arrays?  On a related note, I was
looking through a couple of back issues of DB2 magazine, and it struck
me how much optimization and other performance hints were
available there, and how little there was for Postgres.  Is Great Bridge
planning on creating a knowledge base of these optimizations for the
public?  Or are you planning optimization as one of the commercial
services you provide?  Or some of both?

jeff

Re: Great Bridge benchmark results for Postgres, 4 others

From
Tatsuo Ishii
Date:

Great work!

BTW, was the postmaster configured to have an option "-o -F" to
disable fsync()?
--
Tatsuo Ishii
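For context on the -F question: passing "-o -F" tells the postmaster to skip fsync(), the call that forces each commit to disk before returning, which is safer but slower. A minimal illustration of the cost with plain file writes (not Postgres itself):

```python
import os
import tempfile
import time

def write_records(path, n, sync):
    """Append n small records, optionally forcing each one to disk."""
    with open(path, "wb") as f:
        for i in range(n):
            f.write(b"record %d\n" % i)
            if sync:
                f.flush()
                os.fsync(f.fileno())  # the per-commit cost unless fsync is off

sizes = {}
with tempfile.TemporaryDirectory() as d:
    for sync in (False, True):
        path = os.path.join(d, "log-%s" % sync)
        start = time.perf_counter()
        write_records(path, 200, sync)
        elapsed = time.perf_counter() - start
        sizes[sync] = os.path.getsize(path)
        print("fsync=%s: %.4fs for 200 records" % (sync, elapsed))
```

If fsync was disabled for Postgres but the other products synced every commit, the comparison would be skewed, which is presumably why Tatsuo asks.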

Re: Great Bridge benchmark results for Postgres, 4 others

From
Chris Bitmead
Date:
Can you tell us what version of the (ahem) unnamed proprietary products
you used? :-)  For example, if you used version 8i of an unnamed
proprietary product, that might be informative :-)

Ned Lilly wrote:
>
> Marc's right... we opted for ODBC to ensure as much of an "apples to apples"
> comparison as possible.  Of the 5 databases we tested, a native driver existed for
> only the two (ahem) unnamed proprietary products - Postgres, Interbase, and MySQL
> had to rely on ODBC.  So we used the vendor's own ODBC for each of the other two
> cases.
>
> <disclaimer>
> As with all benchmarks, your mileage will vary according to hardware, OS, and of
> course the specific application.  What we attempted to do here was use two
> industry-standard benchmarks and treat all five products the same.
> </disclaimer>
>
> Presumably, if the vendor had taken the time to write a native driver for
> Postgres, the results would have seen an even bigger kick.  We don't have any
> reason to think that the results for all five tests in native driver mode would be
> out of proportion to the results we got through ODBC.
>
> Regards,
> Ned
>
> The Hermit Hacker wrote:
>
> > On Mon, 14 Aug 2000, Steve Wolfe wrote:
> >
> > > > 1) Using only ODBC drivers.  I don't know how much of an impact a driver
> > > can
> > > > make but it would seem that using native drivers would shutdown one source
> > > > of objections.
> > >
> > >   Using ODBC is guaranteed to slow down the benchmark.  I've seen native
> > > database drivers beat ODBC by anywhere from a factor of two to an order of
> > > magnitude.
> >
> > I haven't had a chance to take a look at the benchmarks yet, having just
> > seen this, but *if* Great Bridge performed their benchmarks such that all
> > the databases were accessed via ODBC, then they are using an
> > 'apples-to-apples' approach, as each will have similar slowdowns as a
> > result ...

RE: Great Bridge benchmark results for Postgres, 4 others

From
"Dan Browning"
Date:
> Can you tell us what version of the (ahem) unnamed
> proprietary products
> you used? :-). For example if you used version 8i of an unnamed
> proprietary product, that might be informative :-).

Oh, but even if you can't tell us what version was used, I'm sure you could
tell us that story about the monster you saw last week.  But which monster
was it?  Was it the monster that ATE EYEs?  And I remember you once said
there was a second monster, could you describe it as well?



Re: Great Bridge benchmark results for Postgres, 4 others

From
Mark Kirkwood
Date:
Excellent result!

Great to see some benchmarking of PostgreSQL and the competition.... and to see it kick ass!

.... but a cautionary note about test "even-handedness": certain current versions of
"proprietary databases" will exhaust 512MB RAM with 100 users... I know this because I have
performed similar tests of PostgreSQL + "other unspecified databases" myself. It would be
interesting to see memory + swap + disk utilization profiles of the test machine with the
various databases.

To give the show away a bit: against a certain well-known "proprietary database" I had to
enable "nofsync" to match its performance (which invalidates a TPC-C benchmark, I think -
no failsafe...).

Not to be a negative elephant about this: the low memory footprint of PostgreSQL is a great
strength, and should be marketed as such!

In a related vein, is it possible that any relevant database parameter settings might be
published to help folk get the best out of their PostgreSQL systems? (Apologies if they are
there and I missed them.)

Regards

Mark




Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Bryan, see my earlier post re: ODBC... will try and answer your other questions
here...

> 2) Postgres has the 'vacuum' process which is typically run nightly which if
> not accounted for in the benchmark would give Postgres an artificial edge.
> I don't know how you would account for it but in fairness I think it should
> be acknowledged.  Do the other big databases have similar maintenance
> issues?

Don't know how this would affect the results directly.  The benchmark app builds
the database clean each time, and takes about 18 hours to run for the full 100
users (for each product).  So each database created was coming in with a clean
slate, with no issues of unreclaimed space or what have you...

> 3) The test system has 512MB RAM.  Given the licensing structure and high
> licencing fees, users have an incentive to use much larger amounts of RAM.
> Someone who can only afford 512MB probably can't afford the big names
> anyway.

True, and it's a fair question how each database would make use of more RAM.  My
guess, however, is that it wouldn't boost the transactions-per-second number -
where more RAM would show up is in more sustained performance at higher
numbers of concurrent users.  Postgres and the two proprietary databases
all kept fairly flat lines (good) as the number of users edged up.  We plan to
continuously re-run the tests with more users and bigger iron, so as we do that,
we'll keep the community informed.

> 4) The article does not mention the speed or number of CPUs, or anything
> about the disks other than size.  I can halfway infer that they are SCSI, but
> how are they laid out.

Yep, the disks were 2x 18 GB Wide SCSI, hot-pluggable.  The CPU was a single
600 MHz Pentium III.

> I am not trying to tear the benchmark down.  Just wanting it more immune to
> such attempts.

Not a problem, happy to try and answer any questions.  Again, this is not
intended as a categorical statement of Postgres' superiority in any and all
circumstances.  It's an attempt to share our research with the community on our
best attempt at a first-pass "apples to apples" comparison among the 5
products.  I should also note that since the source to the benchmarks was not
available to us, including in many cases even the SQL queries, we couldn't do
much in the way of "tuning" that you'd normally want your DBA to do.  Although
again, that limitation applied for all five products.

Regards,
Ned


Re: Great Bridge benchmark results for Postgres, 4 others

From
Adrian Phillips
Date:
>>>>> "Ned" == Ned Lilly <ned@greatbridge.com> writes:

<snip>

    Ned> The other databases tested against the AS3AP benchmarks, open
    Ned> source competitors MySQL 3.22 and Interbase 6.0, demonstrated
    Ned> some speed with a low number of users but a distinct lack of
    Ned> scalability. MySQL reached a peak of 803.48 with two users,
    Ned> but its performance fell precipitously under the stress of
    Ned> additional users to a rate of 117.87 transactions per second
    Ned> with 100 users. Similarly, Interbase reached 424 transactions
    Ned> per second with four users, but its performance declined
    Ned> steadily with additional users, dropping off to 146.86
    Ned> transactions per second with 100 users.

It would have been more interesting if MySQL 3.23 had been tested, as
it has reached what seems to be a fairly stable beta and seems to
perform some operations significantly faster than 3.22, and I believe it
may scale somewhat better as well. Of course, it may not be so
interesting for most PostgreSQL users :-)

Sincerely,

Adrian Phillips

--
Your mouse has moved.
Windows NT must be restarted for the change to take effect.
Reboot now?  [OK]

warning message

From
Steve Heaven
Date:
When I do a
vacuum analyze ma_b;

I get this message:
NOTICE:  Index ma_idx: NUMBER OF INDEX' TUPLES (17953) IS NOT THE SAME AS
HEAP' (17952)

Is it anything to worry about, and how do I fix it?

Thanks

Steve
--
thorNET  - Internet Consultancy, Services & Training
Phone: 01454 854413
Fax:   01454 854412
http://www.thornet.co.uk

Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Hi Jeff,

> i haven't played with interbase yet, but my understanding is they have
> two types of server -- the "classic" (process per connection?) and a
> "superserver" (multithreaded).  i'm guessing the multithreaded is faster
> (why bother with the added complexity if it isn't?)  so which version
> did you run this test against?

Classic.  Superserver didn't work with the ODBC driver.  Richard Brosnahan,
the lead engineer on the project at Xperts, could connect, but could not
successfully build tables and load them due to SQL errors.  Feel free to
contact him directly (he's cc'ed here).

> the other question i have is if it was possible that the disks were a
> bottleneck in the test process.  it seems strange that three databases
> would perform nearly identically for so long if there wasn't a
> bottleneck somewhere.  were the drives striped?  did you consider
> performing the test with faster raid arrays?

The disks were not striped.  We may look at RAID in the future, but again,
this was only a simple low-end test.  It seems reasonable to assume that the
disks were a bottleneck, but they would have been a bottleneck for all of
the databases.

> on a related note, i was
> looking through a couple of back issues of db2 magazine, and it struck
> me how much optimization and other performance hints there were
> available there & how little there was for postgres.  is great bridge
> planning on creating a knowledge base of these optimizations for the
> public? or are you planning optimization as one of the commercial
> services you provide? or some of both?

Yes, yes, and yes.  We'll have more to say about our commercial services in
the near future, but there will always be a substantial free, publicly
available knowledgebase as part of our commitment to the open source
community.

Regards,
Ned


Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Good question, Tatsuo... We ran it with and without fsync() - there was
only a 2-3% difference between the two.


Tatsuo Ishii wrote:

> Great work!
>
> BTW, was the postmaster configured to have an option "-o -F" to
> disable fsync()?
> --
> Tatsuo Ishii
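[Archive note: for readers unfamiliar with the flag in question, it would look something like this - a sketch for a 7.x-era postmaster, with an illustrative data directory path, not the benchmark's actual setup.]

```shell
# -o passes the option string through to each backend process; -F tells the
# backends to skip fsync(), trading crash safety for speed.
# The path below is illustrative, not taken from the benchmark configuration.
postmaster -i -D /usr/local/pgsql/data -o -F
```

With -F enabled, a crash or power failure can leave the database inconsistent, which is why results obtained with fsync() disabled are usually flagged as such.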


Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Oh, Dan, I'm not that clever... ;-)

But I *can* tell you that the market leading proprietary RDBMS products we
tested were not IBM, Informix, or Sybase.

Regards,
Ned


Dan Browning wrote:

> > Can you tell us what version of the (ahem) unnamed
> > proprietary products
> > you used? :-). For example if you used version 8i of an unnamed
> > proprietry product, that might be informative :-).
>
> Oh, but even if you can't tell us what version was used, I'm sure you could
> tell us that story about the monster you saw last week.  But which monster
> was it?  Was it the monster that ATE EYEs?  And I remember you once said
> there was a second monster, could you describe it as well?


Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Hi Adrian,

We only used the released versions of each database.  We'd be happy to run
the tests again when MySQL 3.23 is official, or when Interbase ships a
real ODBC driver for 6.0 for that matter.

Regards,
Ned

Adrian Phillips wrote:

> It would have been more interesting if MySQL 3.23 had been tested as
> this has reached what seems to be a fairly stable beta and seems to
> perform some operations significantly faster than 3.22 and I believe
> may scale somewhat better as well. Of course it may not be so
> interesting for most PostgreSQL users :-)
>
> Sincerely,
>
> Adrian Phillips
>
> --
> Your mouse has moved.
> Windows NT must be restarted for the change to take effect.
> Reboot now?  [OK]


Re: Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Mark Kirkwood wrote:

> In a related vein, is it possible that any relevant database parameter settings might be
> published to help folk get the best out of their PostgreSQL systems? (Apologies if they are
> there and I missed them.)

Hi Mark, here's some more info from the lead engineer on the project for Xperts, Richard Brosnahan
(cc'ed here).  Please feel free to contact him directly.

--

With PostgreSQL, we increased the size of the cache, and increased the
number of simultaneous users. We did this by starting the database with a
command that included parameters for this purpose. Out of the box,
PostgreSQL is very conservative with resource use, and thus only allows 32
simultaneous connections. Increasing the number of simultaneous users
requires an increase in cache size. This boost in cache size also boosts
performance by a small margin.

We also executed a process called "vacuum analyze" after loading the tables,
but before the test. This process optimizes indexes and frees up disk space
a bit. The optimized indexes boost performance by some margin.
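[Archive note: sketched concretely, the two steps described above would look something like this. The flags are the 7.0-era ones; the values, path, and database name are illustrative, not the ones used in the test.]

```shell
# Raise the connection limit (-N, default 32) and the shared buffer count (-B).
# The postmaster of this era requires -B to be at least twice -N, which is why
# allowing more simultaneous users also forces a bigger buffer cache.
postmaster -i -N 100 -B 256 -D /usr/local/pgsql/data

# After bulk-loading the tables, and before starting the timed runs:
psql -d benchdb -c 'VACUUM ANALYZE;'
```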




Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Doh!  Sorry, I didn't cc Richard Brosnahan after all.  He's at
<rbrosnahan@xperts.com>


Ned Lilly wrote:

> Hi Jeff,
>
> > i haven't played with interbase yet, but my understanding is they have
> > two types of server -- the "classic" (process per connection?) and a
> > "superserver" (multithreaded).  i'm guessing the multithreaded is faster
> > (why bother with the added complexity if it isn't?)  so which version
> > did you run this test against?
>
> Classic.  Superserver didn't work with the ODBC driver.  Richard Brosnahan,
> the lead engineer on the project at Xperts, could connect, but could not
> successfully build tables and load them due to SQL errors.  Feel free to
> contact him directly (he's cc'ed here).


Re: Great Bridge benchmark results for Postgres, 4 others

From
"Ross J. Reedstrom"
Date:
On Tue, Aug 15, 2000 at 12:21:25PM -0400, Ned Lilly wrote:
> Oh, Dan, I'm not that clever... ;-)
>
> But I *can* tell you that the market leading proprietary RDBMS products we
> tested were not IBM, Informix, or Sybase.
>

And in reply to the MySQL version comment/question, Ned said:
 "We only used the released versions of each database."

I took that to mean they used the latest released version of each
database.  One thing I couldn't deduce: which operating systems were the
commercial RDBMSs run on top of?  NT for one of them, for sure, but the
other can probably run on either of the quoted OSs.  If it was run on NT,
we might be seeing the Linux vs. NT effect.

Ross
--
Ross J. Reedstrom, Ph.D., <reedstrm@rice.edu>
NSBRI Research Scientist/Programmer
Computer and Information Technology Institute
Rice University, 6100 S. Main St.,  Houston, TX 77005

Tuning PostgreSQL to use more RAM...

From
"Steve Wolfe"
Date:
> Actually, more RAM would permit you to increase both the -B parameters as
> well as the -S one ... which are both noted for providing performance
> increases ... -B more on repetitive queries and -S on anything involving
> ORDER BY or GROUP BY ...

  For a while now, I've been meaning to investigate how to get PostgreSQL to
take advantage of the RAM in our machine.  It has 512 megs, and most of the
time, about 275-400 megs of it simply go to disk cache & buffer, as nothing
else wants it.  Occasionally, we'll only have 250-300 megs of disk cache.
: )

   While I don't mind disk cache, I feel that we could get better
performance by letting postgres use another hundred megs or so, especially
since our entire /usr/local/pgsql/base directory has only 134 megs of data.
We're currently starting the postmaster with "-B 2048".  The machine has 4
Xeon processors, and 5 drives in the RAID array, so we do have a small bit
of CPU power and disk throughput.  Any suggestions or pointers are welcome.

steve



Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
"Ross J. Reedstrom" wrote:

> On Tue, Aug 15, 2000 at 12:21:25PM -0400, Ned Lilly wrote:
> > Oh, Dan, I'm not that clever... ;-)
> >
> > But I *can* tell you that the market leading proprietary RDBMS products we
> > tested were not IBM, Informix, or Sybase.
> >
>
> And in reply to the MySQL version comment/question, Ned said:
>  "We only used the released versions of each database."
>
> I took that to mean they used the latest released version of each
> database.  One thing I couldn't deduce: which operating system where the
> commercial RDBMs run on top of? NT for one of them, for sure, but the
> other can probably run on either of the quoted OSs. If it was run on NT,
> we might be seeing the linux vs. NT effect.

One of them ran on NT, the other four ran on Red Hat Linux 6.1.


Re: Great Bridge benchmark results for Postgres, 4 others

From
Chris Bitmead
Date:
Ned Lilly wrote:
>
> Oh, Dan, I'm not that clever... ;-)
>
> But I *can* tell you that the market leading proprietary RDBMS products we
> tested were not IBM, Informix, or Sybase.

That's very helpful. Can you also tell us if Proprietary 1 or Proprietary
2 was definitely NOT MS-SQL Server?

>
> Regards,
> Ned
>
> Dan Browning wrote:
>
> > > Can you tell us what version of the (ahem) unnamed
> > > proprietary products
> > > you used? :-). For example if you used version 8i of an unnamed
> > > proprietry product, that might be informative :-).
> >
> > Oh, but even if you can't tell us what version was used, I'm sure you could
> > tell us that story about the monster you saw last week.  But which monster
> > was it?  Was it the monster that ATE EYEs?  And I remember you once said
> > there was a second monster, could you describe it as well?

Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Er... let me put it this way.  Proprietary 2 prefers to run on Windows NT.


Chris Bitmead wrote:

> That's very helpful. Can you also tell us if Proprietary 1 or Proprietary
> 2 was definitely NOT MS-SQL Server?


Re: Great Bridge benchmark results for Postgres, 4 others

From
Alfred Perlstein
Date:
> Chris Bitmead wrote:
>
> > That's very helpful. Can you also tell us if Proprietary 1 or Proprietary
> > 2 was definitely NOT MS-SQL Server?

* Ned Lilly <ned@greatbridge.com> [000815 18:59] wrote:
> Er... let me put it this way.  Proprietary 2 prefers to run on Windows NT.

It's oracle??? j/k

You have some people in San Jose at the Expo right?  I was going to head
over to it tomorrow if you do.

-Alfred

Re: Great Bridge benchmark results for Postgres, 4 others

From
Chris Bitmead
Date:
Ned Lilly wrote:
>
> Er... let me put it this way.  Proprietary 2 prefers to run on Windows NT.

The performance is so bad it must be MS-Access :-).

> Chris Bitmead wrote:
>
> > That's very helpful. Can you also tell us if Proprietary 1 or Proprietary
> > 2 was definitely NOT MS-SQL Server?

Re: Great Bridge benchmark results for Postgres, 4 others

From
Fabrice Scemama
Date:
Quoted from http://www.greatbridge.com/about/ourteam.html :

> Ned Lilly
> Vice President of Evangelism and Hacker Relations
>
> Ned Lilly brings significant experience in business development, operations
> management and technology strategy to Great Bridge. Before joining Great
> Bridge, he served as a New Ventures Director for Landmark Communications,
> where he was instrumental in building the business plan for Great Bridge. Lilly
> has also held senior management positions in several Internet startups, including
> Vice President of an automotive auction business and General Manager of an
> online reservations company. He has managed the development of multiple
> technology architectures on open source software systems. He holds a BA from
> the University of Virginia and MA from George Washington University.

Ned, I just love Postgres... I strongly believe it can compete
with major commercial DBMSs, and that it rules over free DBMSs
(be they open source or not, like MySQL).

But I think Postgres' performance should not be over-boasted,
because such behaviour could only mislead and possibly deceive
future users. As a commercial consulting company, you might
consider adding some disclaimers to your benchmarks.

Fabrice

Re: Great Bridge benchmark results for Postgres, 4 others

From
Ned Lilly
Date:
Hi Fabrice,

We just ran the benchmarks, the same software that the trade magazines use when they're
evaluating commercial products.  The results speak for themselves.

We certainly don't want to over-boast... and I can assure you that every assertion in
that story was double and triple-checked for accuracy.  People can draw their own
conclusions from the results - like all benchmarks, it's only useful inasmuch as it
gives you a directional indicator about the capabilities of the product.  Particularly
in this case, since it was only a single-processor machine with only 1-100 users.  But
we wanted to share the results of our testing with the community, and perhaps stimulate
more formal testing by other "unbiased" parties (e.g. the technical trade press).

Regards,
Ned



Fabrice Scemama wrote:

> Ned, I just love Postgres... I strongly believe it can compete
> with major commercial DBMSs, and that it rules over free DBMSs
> (be they open source or not, like MySQL).
>
> But I think Postgres' performance should not be over-boasted,
> because such behaviour could only mislead and possibly deceive
> future users. As a commercial consulting company, you might
> consider adding some disclaimers to your benchmarks.
>
> Fabrice


Re: Great Bridge benchmark results for Postgres, 4 others

From
"Adam Ruth"
Date:
I just want to add that these benchmarks actually somewhat validate my own
testing.
I was evaluating PostgreSQL vs. MS SQL Server two months ago.  I ran a
series of tests that I felt approximated the load that was then current.  We
had a database that ran on MS SQL Server, and I was trying to convince
management to switch to PostgreSQL.  They weren't too happy about forking
over $12,000 just to license MSSQL on the 4 processor box.

My testing showed that for small (meaning simple) queries, which was the
lion's share of the work, PostgreSQL was about 20% faster than SQL Server.
Inserts, updates, and deletes were on par, and could vary from each other by
about 10% either way.  It seemed that PostgreSQL was slower when inserting
records into tables with many indexes when the tables had many records (many
being > 500,000).  The more complex the query got, the faster MS SQL Server
became.  It seemed to be able to use an index in places that PostgreSQL
couldn't, and could use parallelism for some of the larger queries.  But
since those kinds of queries are rare, they didn't impact the decision much.
Optimization is a tricky business to begin with.

They scaled up about the same.  The only need is for about 25 internal
users, and a few concurrent internet users (using connection pooling).  We
didn't test above that, because we didn't have the resources.

This company expects to grow their database greatly.  It's currently at
about 70,000 records, but they expect to reach 500,000 in the not too
distant future.  Toward that end, I performed the tests with 2,000,000
records, just to be sure.

This seems to weigh in PostgreSQL's favor, but it doesn't tell
the whole story.  The SQL Server machine was a Compaq 4x650 Xeon box, with
512 MB of RAM.  The PostgreSQL machine was a Gateway ALR 1x600 Pentium III,
with 512 MB of RAM.

--
Adam Ruth
InterCation, Inc.
www.intercation.com


"Ned Lilly" <ned@greatbridge.com> wrote in message
news:399AA636.790ED86E@greatbridge.com...
> Hi Fabrice,
>
> We just ran the benchmarks, the same software that the trade magazines use
> when they're evaluating commercial products.  The results speak for
> themselves.
>
> We certainly don't want to over-boast... and I can assure you that every
> assertion in that story was double and triple-checked for accuracy.  People
> can draw their own conclusions from the results - like all benchmarks, it's
> only useful inasmuch as it gives you a directional indicator about the
> capabilities of the product.  Particularly in this case, since it was only a
> single-processor machine with only 1-100 users.  But we wanted to share the
> results of our testing with the community, and perhaps stimulate more formal
> testing by other "unbiased" parties (e.g. the technical trade press).
>
> Regards,
> Ned
>
>
>
> Fabrice Scemama wrote:
>
> > Ned, I just love Postgres... I strongly believe it can compete
> > with major commercial DBMSs, and that it rules over free DBMSs
> > (be they open source or not, like MySQL).
> >
> > But I think Postgres' performance should not be over-boasted,
> > because such behaviour could only mislead and possibly deceive
> > future users. As a commercial consulting company, you might
> > consider adding some disclaimers to your benchmarks.
> >
> > Fabrice
>



Re: Tuning PostgreSQL to use more RAM...

From
Tom Lane
Date:
"Steve Wolfe" <steve@iboats.com> writes:
>    While I don't mind disk cache, I feel that we could get better
> performance by letting postgres use another hundred megs or so, especially
> since our entire /usr/local/pgsql/base directory has only 134 megs of data.
> We're currently starting the postmaster with "-B 2048".

You might try increasing the default -S setting as well.

I am not convinced that increasing -B to huge values will be a net win.
At some point you will start losing performance due to the sequential
scans of the buffer cache that are done at transaction commit (and other
places IIRC).  I haven't done any benchmarking of different settings,
however, so I have no idea what the optimal level might be.

            regards, tom lane
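[Archive note: to put rough numbers on those two knobs - a sketch, assuming the 8 KB shared-buffer page size that was the compile-time default in this era.]

```shell
# -B counts shared buffer pages of 8 KB each, so Steve's "-B 2048" is:
echo "$((2048 * 8 / 1024)) MB of shared buffers"   # prints "16 MB of shared buffers"

# -S is per-backend sort memory in kilobytes; it can be raised via the
# postmaster's -o pass-through, e.g. (illustrative values and path):
# postmaster -i -B 2048 -o '-S 4096' -D /usr/local/pgsql/data
```

So even a seemingly large -B leaves most of a 512 MB machine to the OS disk cache, which matches what Steve observes.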

Re: warning message

From
Tom Lane
Date:
Steve Heaven <steve@thornet.co.uk> writes:
> When I do a
> vacuum analyze ma_b;

> I get this message
> NOTICE:  Index ma_idx: NUMBER OF INDEX' TUPLES (17953) IS NOT THE SAME AS
> HEAP' (17952)

> It it anything to worry about and how do I fix it?

Dropping and recreating the index would probably make the message go
away.  We've been puzzling over sporadic reports of this notice for
some time now (check the pghackers archives).  If you can come up with
a reproducible way of getting into this state, we'd love to see it.
We know how to create a situation where there are fewer index than
heap tuples --- and that's harmless, at least for the known way of
causing it.  But we've got no idea how there could be more index than
heap entries.

            regards, tom lane
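[Archive note: concretely, the rebuild Tom suggests would look something like this - the database and column names are hypothetical; substitute the real definition of ma_idx.]

```shell
# Drop and recreate the index named in the NOTICE (hypothetical definition).
psql -d mydb -c 'DROP INDEX ma_idx;'
psql -d mydb -c 'CREATE INDEX ma_idx ON ma_b (some_column);'
# A subsequent "VACUUM ANALYZE ma_b;" should then complete without the NOTICE.
```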