Thread: AW: Berkeley DB...

AW: Berkeley DB...

From: Zeugswetter Andreas SB
> Frankly, based on my experience with Berkeley DB, I'd bet on mine.
> I can do 2300 tuple fetches per CPU per second, with linear scale-
> up to at least four processors (that's what we had on the box we
> used).  That's 9200 fetches a second.  Performance isn't going
> to be the deciding issue.

Wow, that sounds darn slow. Speed of a seq scan on one CPU, 
one disk should give you more like 19000 rows/s with a small record size.
Of course you are probably talking about random fetch order here,
but we need fast seq scans too.

(10 Mb/s disk, 111 b/row, no cpu bottleneck, nothing cached , 
Informix db, select count(*) ... where notindexedfield != 'notpresentvalue';
Table pages interleaved with index pages, tabsize 337 Mb 
(table with lots of insert + update + delete history) )
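As a back-of-envelope sketch of the numbers above (the 10 Mb/s disk bandwidth and 111 b/row are from this post; the overhead factor is inferred from the claimed ~19000 rows/s, not measured):

```python
# Back-of-envelope check of the claimed seq-scan rate.
# Assumptions from the post: ~10 MB/s sustained read, 111 bytes per row,
# table pages interleaved with index pages and full of update/delete history,
# so not every byte read off the disk is a live row.

DISK_BYTES_PER_S = 10 * 1_000_000   # ~10 MB/s sequential read
ROW_BYTES = 111                     # small record size from the post

# Raw upper bound: every byte read is live row data.
raw_rows_per_s = DISK_BYTES_PER_S / ROW_BYTES
print(f"raw upper bound: {raw_rows_per_s:,.0f} rows/s")    # ~90,090 rows/s

# The quoted ~19,000 rows/s implies roughly a 4-5x overhead from
# interleaved index pages, dead tuples, and page headers:
observed_rows_per_s = 19_000
overhead = raw_rows_per_s / observed_rows_per_s
print(f"implied overhead factor: {overhead:.1f}x")         # ~4.7x
```

So the 19000 rows/s figure already concedes most of the raw disk bandwidth to page-level overhead; it is not a best-case number.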

Andreas


Re: AW: Berkeley DB...

From: Hannu Krosing
Zeugswetter Andreas SB wrote:
> 
> > Frankly, based on my experience with Berkeley DB, I'd bet on mine.
> > I can do 2300 tuple fetches per CPU per second, with linear scale-
> > up to at least four processors (that's what we had on the box we
> > used).  That's 9200 fetches a second.  Performance isn't going
> > to be the deciding issue.
> 
> Wow, that sounds darn slow. Speed of a seq scan on one CPU,
> one disk should give you more like 19000 rows/s with a small record size.
> Of course you are probably talking about random fetch order here,
> but we need fast seq scans too.

Could someone test this on MySQL with the BDB storage that should be out
by now?

It could be quite indicative of what we can expect.

> (10 Mb/s disk, 111 b/row, no cpu bottleneck, nothing cached ,
> Informix db, select count(*) ... where notindexedfield != 'notpresentvalue';
> Table pages interleaved with index pages, tabsize 337 Mb
> (table with lots of insert + update + delete history) )
> 
> Andreas


Re: Berkeley DB...

From: "Matthias Urlichs"
Hi,

Hannu Krosing:
> 
> Could someone test this on MySQL with bsddb storage that should be out
> by now ?
> 
As long as the BDB support in MySQL can't even come close to running
their own benchmark suite, I for one will not be using it for any kind
of indicative speed test...

... that being said (I did run a quick test with 10000 randomly-inserted
records, fetched back in index order): if the data is in the cache, the
speed difference is insignificant.

I did this:

create table foo (a int not null,b char(100));
create index foo_a on foo(a);
for (i = 0; i < 10000; i++) {
    insert into foo(a,b) values( ((i*3467)%10000) , 'fusli');
}
select a from foo order by a;


Times for the insert loop:
14   MySQL-MyISAM
23   PostgreSQL (no fsync)
53   MySQL-BDB (with fsync -- don't know how to turn it off yet)

The select:
0.75  MySQL-MyISAM
0.77  MySQL-BDB
2.43  PostgreSQL
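
Converted to rough per-row rates (straightforward arithmetic on the timings above, 10000 rows each):

```python
# Rough rates implied by the quick-test timings (10,000 rows per run).
ROWS = 10_000

insert_secs = {
    "MySQL-MyISAM": 14,
    "PostgreSQL (no fsync)": 23,
    "MySQL-BDB (with fsync)": 53,
}
select_secs = {
    "MySQL-MyISAM": 0.75,
    "MySQL-BDB": 0.77,
    "PostgreSQL": 2.43,
}

for name, secs in insert_secs.items():
    print(f"{name}: {ROWS / secs:,.0f} inserts/s")
for name, secs in select_secs.items():
    print(f"{name}: {ROWS / secs:,.0f} indexed fetches/s")
```

Note that the ~13,000 fetches/s for MySQL-BDB here is a cached, index-order scan; it is not comparable to the ~2,300 random fetches per CPU per second quoted earlier for a 250GB on-disk workload.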

I'll do a "real" test once the BDB support in MySQL is stable enough to
run the MySQL benchmark suite.

Anyway, this quick and dirty test seems to show that BDB doesn't
slow down data retrieval.


NB, the select loop was using an index scan in all cases.
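
The quick test above can be sketched as a self-contained script; this uses Python's built-in sqlite3 purely as a stand-in engine (the original ran against MySQL and PostgreSQL), with the table schema and the (i*3467)%10000 key scrambling taken from the post:

```python
import sqlite3
import time

# Re-creation of the quick test from the post, sqlite3 as a stand-in engine.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("create table foo (a int not null, b char(100))")
cur.execute("create index foo_a on foo(a)")

# Insert 10,000 rows in scrambled key order, as in the original loop.
t0 = time.perf_counter()
for i in range(10_000):
    cur.execute("insert into foo(a, b) values (?, ?)",
                ((i * 3467) % 10_000, "fusli"))
conn.commit()
insert_s = time.perf_counter() - t0

# Fetch them back in index order, as in the original select.
t0 = time.perf_counter()
rows = cur.execute("select a from foo order by a").fetchall()
select_s = time.perf_counter() - t0

# 3467 is coprime with 10000, so the keys are a permutation of 0..9999
# and the ordered select must return exactly that sorted sequence.
assert [r[0] for r in rows] == list(range(10_000))
print(f"insert loop: {insert_s:.2f}s   select: {select_s:.2f}s")
```

The absolute timings will of course say nothing about the 2000-era engines; the point is only that the test itself is trivially reproducible.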

-- 
Matthias Urlichs  |  noris network GmbH   |   smurf@noris.de  |  ICQ: 20193661
The quote was selected randomly. Really.       |        http://smurf.noris.de/
-- 
"If the vendors started doing everything right, we would be out of a job.
Let's hear it for OSI and X!  With those babies in the wings, we can count
on being employed until we drop, or get smart and switch to gardening,
paper folding, or something."  --C. Philip Wood


Re: AW: Berkeley DB...

From: "Michael A. Olson"
At 12:59 PM 5/25/00 +0200, Zeugswetter Andreas SB wrote:

> Wow, that sounds darn slow. Speed of a seq scan on one CPU, 
> one disk should give you more like 19000 rows/s with a small record size.
> Of course you are probably talking about random fetch order here,
> but we need fast seq scans too.

The test was random reads on a 250GB database.  I don't have a
similar characterization for sequential scans off the top of my
head.

                mike