Thread: MySQL million tables

MySQL million tables

From
Christopher Kings-Lynne
Date:
I see in this post on planetmysql.org that this guy got MySQL to die
after creating 250k tables.  Anyone want to see how far PostgreSQL can
go? :)

http://bobfield.blogspot.com/2006/03/million-tables.html

Also, can we beat his estimate of 27 hours for creation?

Chris


Re: MySQL million tables

From
"Jonah H. Harris"
Date:
On 3/9/06, Christopher Kings-Lynne <chriskl@familyhealth.com.au> wrote:
> Also, can we beat his estimate of 27 hours for creation?

I just tried it on a Thinkpad (Pentium M @ 1.73GHz) running SuSE 10 and an untuned PostgreSQL 8.1.3, using a shell script and psql.  I was able to create 274,000 tables in exactly 798 seconds.

And it's still chugging away :)





--
Jonah H. Harris, Database Internals Architect
EnterpriseDB Corporation
732.331.1324

Re: MySQL million tables

From
"Stefan 'Kaishakunin' Schumacher"
Date:
Thus spoke Christopher Kings-Lynne (chriskl@familyhealth.com.au)
> I see in this post on planetmysql.org that this guy got MySQL to die
> after creating 250k tables.  Anyone want to see how far PostgreSQL can
> go? :)
>
> http://bobfield.blogspot.com/2006/03/million-tables.html
>
> Also, can we beat his estimate of 27 hours for creation?

for i in `seq 1 1000000`;
do
echo "create table test$i (ts timestamp);"|psql tabletorture;
done

This is a Pentium 3 with 256MB RAM; I'll see how far it gets and how
long it takes.

So far I've got ~2600 tables in 4 minutes, so it might take a day to reach
the million. However, the MySQL guy gives no details about his test
machine, so the results aren't comparable.
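(Most of the time here is probably connection overhead, since the loop
forks a new psql per table. An untested variant that feeds a single psql
session should be much faster:)

for i in `seq 1 1000000`; do
    # generate all the statements and feed one psql session,
    # instead of opening one connection per CREATE TABLE
    echo "create table test$i (ts timestamp);"
done | psql tabletorture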



--
PGP FPR: CF74 D5F2 4871 3E5C FFFE 0130 11F4 C41E B3FB AE33
--
In the sound of the Gion Shoja bells echoes the impermanence of all things;
the color of the sala blossoms reveals that the successful must fall.
The proud do not endure; they fade like a dream on a spring night.
The mighty fall at last; they are as dust before the wind.  Heike Monogatari


Re: MySQL million tables

From
Christopher Kings-Lynne
Date:
Another mysql blogger has chimed in now:

http://arjen-lentz.livejournal.com/66547.html

He did it with MyISAM tables though.

Chris

Jonah H. Harris wrote:
> On 3/9/06, Christopher Kings-Lynne <chriskl@familyhealth.com.au> wrote:
>> Also, can we beat his estimate of 27 hours for creation?
>
> I just tried it on a Thinkpad (Pentium M @ 1.73GHz) running SuSE 10
> and an untuned PostgreSQL 8.1.3, using a shell script and psql.  I was
> able to create 274,000 tables in exactly 798 seconds.
>
> And it's still chugging away :)


Re: MySQL million tables

From
"Jonah H. Harris"
Date:
On 3/9/06, Stefan 'Kaishakunin' Schumacher <stefan@net-tex.de> wrote:
> Thus spoke Christopher Kings-Lynne (chriskl@familyhealth.com.au)
>> I see in this post on planetmysql.org that this guy got MySQL to die
>> after creating 250k tables.  Anyone want to see how far PostgreSQL can
>> go? :)
>>
>> http://bobfield.blogspot.com/2006/03/million-tables.html
>>
>> Also, can we beat his estimate of 27 hours for creation?
>
> for i in `seq 1 1000000`;
> do
> echo "create table test$i (ts timestamp);"|psql tabletorture;
> done

My run batched 500 CREATE TABLEs per transaction, so my results aren't comparable either.
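For the curious, the batching looked roughly like this (a sketch, not the
exact script, and the database name is made up):

for i in `seq 1 274000`; do
    # open a transaction every 500 tables, commit at each boundary
    test $((i % 500)) -eq 1 && echo "BEGIN;"
    echo "CREATE TABLE test$i (ts timestamp);"
    test $((i % 500)) -eq 0 && echo "COMMIT;"
done | psql testdb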


--
Jonah H. Harris, Database Internals Architect
EnterpriseDB Corporation
732.331.1324

Re: MySQL million tables

From
Jean-Paul Argudo
Date:

Hi all,

I don't think this kind of test means much, given that table creation
time doesn't really matter in practice.

... but I was curious, so I made the test :)

>> Also, can we beat his estimate of 27 hours for creation?

We would need his complete machine specs to be sure we can beat him.

> for i in `seq 1 1000000`;
> do
> echo "create table test$i (ts timestamp);"|psql tabletorture;
> done

Just tried that script (still running) on a Dell PowerEdge 2800 (not my
preferred machine, but the only one I could use for this test...)

2 physical CPUs with hyperthreading: Intel(R) Xeon(TM) 3.00GHz with 2MB cache
2GB RAM

Built-in PERC4, with RAID 5 across 5+1 10Krpm disks

Linux ***** 2.4.27-2-686-smp #1 SMP Wed Aug 17 10:05:21 UTC 2005 i686
GNU/Linux

I can send more details if you want (postgresql.conf...)

> So far I've got ~2600 tables in 4 minutes, so it might take a day to reach
> the million. However, the MySQL guy gives no details about his test
> machine, so the results aren't comparable.

I'm seeing a rate of about 12 tables created per second (fluctuating
between 10 and 13, so that's an approximation; I'll have final figures
when it completes).

At ~12 tables/second, a million tables works out to roughly
1,000,000 / 12 ≈ 83,000 seconds, so about 23 hours on this server.

I'll send complete results when it's finished.

--
Jean-Paul Argudo
www.PostgreSQLFr.org
www.dalibo.com

Re: MySQL million tables

From
"Greg Sabino Mullane"
Date:


I kicked this off last night before bed. It ran much quicker than
I expected, given that 27-hour estimate.

Total time: 23 minutes 29 seconds :)

I committed every 2,000 tables. It could probably be made
to go slightly faster, as this was an out-of-the-box Postgres database
with no postgresql.conf tweaking. I simply piped a text file into
psql for the testing, from the output of a quick perl script:

my $max   = 1_000_000;  # total number of tables
my $check = 2_000;      # commit every 2,000 CREATE TABLEs
print "BEGIN;\n";
for my $num (1..$max) {
  print "CREATE TABLE foo$num (a smallint);\n";
  $num % $check or print "COMMIT;\nBEGIN;\n";
}
print "COMMIT;\n";

And the proof:

greg=# select count(*) from pg_class where relkind='r' and relname ~ 'foo';
  count
---------
 1000000

Maybe I'll see just how far PG *can* go next. Time to make a PlanetPG post,
at any rate.

--
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200603090720
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8



Re: MySQL million tables

From
Christopher Browne
Date:
A long time ago, in a galaxy far, far away, greg@turnstep.com ("Greg Sabino Mullane") wrote:
> I kicked this off last night before bed. It ran much quicker than
> I thought, due to that 27 hour estimate.
>
> Total time: 23 minutes 29 seconds :)

I'm jealous.  I've got the very same thing running on some Supposedly
Pretty Fast Hardware, and it's cruising towards 31 minutes plus a few
seconds.

While it's running, the time estimate is...

  select (now() - '2006-03-09 13:47:49') * 1000000 / (select count(*)
  from pg_class where relkind='r' and relname ~ 'foo');

That pretty quickly converged to 31:0?...
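(I just re-ran that by hand every so often; an untested watcher would be
something like the below, where tabledb is whatever database the run targets:)

while sleep 30; do
    # print the current table count every 30 seconds
    psql -t -c "select count(*) from pg_class where relkind='r' and relname ~ 'foo'" tabledb
done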

> Maybe I'll see just how far PG *can* go next. Time to make a
> PlanetPG post, at any rate.

Another interesting approach to it would be to break this into several
streams.

There ought to be some parallelism to be gained, on systems with
multiple disks and CPUs, by having 1..100000 go in parallel to 100001
to 200000, and so forth, for (oh, say) 10 streams.  Perhaps it's
irrelevant parallelism; knowing that it helps/hurts would be nice...
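An untested sketch of a 10-stream version (database name assumed):

for s in 0 1 2 3 4 5 6 7 8 9; do
    (
        echo "BEGIN;"
        # stream s creates tables foo(s*100000+1) .. foo((s+1)*100000)
        for i in `seq $((s * 100000 + 1)) $(( (s + 1) * 100000 ))`; do
            echo "CREATE TABLE foo$i (a smallint);"
        done
        echo "COMMIT;"
    ) | psql tabledb &   # each stream gets its own backend
done
wait   # block until all ten streams finish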
--
(format nil "~S@~S" "cbbrowne" "cbbrowne.com")
http://linuxfinances.info/info/rdbms.html
Where do you want to Tell Microsoft To Go Today?

Re: MySQL million tables

From
Christopher Kings-Lynne
Date:
> greg=# select count(*) from pg_class where relkind='r' and relname ~ 'foo';
>   count
> ---------
>  1000000
>
> Maybe I'll see just how far PG *can* go next. Time to make a PlanetPG post,
> at any rate.


Try \dt :D

Chris


Re: MySQL million tables

From
"Greg Sabino Mullane"
Date:


>> greg=# select count(*) from pg_class where relkind='r' and relname ~ 'foo';
>>   count
>> ---------
>>  1000000

> Try \dt :D

Sure. Took 42 seconds, but it showed up just fine. :)

          List of relations
 Schema |   Name    | Type  | Owner
--------+-----------+-------+-------
 public | foo1      | table | greg
 public | foo10     | table | greg
 public | foo100    | table | greg
 public | foo1000   | table | greg
 public | foo10000  | table | greg
 public | foo100000 | table | greg
 public | foo100001 | table | greg
 public | foo100002 | table | greg
 public | foo100003 | table | greg

etc...


--
Greg Sabino Mullane greg@turnstep.com
PGP Key: 0x14964AC8 200603100913
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8




Re: MySQL million tables

From
Richard Huxton
Date:
Greg Sabino Mullane wrote:
>>> greg=# select count(*) from pg_class where relkind='r' and relname ~ 'foo';
>>>   count
>>> ---------
>>>  1000000
>
>> Try \dt :D
>
> Sure. Took 42 seconds, but it showed up just fine. :)

The real test is to put a few rows into each, join the lot and see how
long it takes geqo to plan it.
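Untested, but something along these lines; geqo_threshold defaults to 12,
so even a 20-way join should exercise GEQO (database name assumed):

{
    # build a 20-way cross join of the foo tables; EXPLAIN never executes
    # the query, so the wall-clock time is dominated by planning
    echo "EXPLAIN SELECT count(*) FROM foo1"
    for i in `seq 2 20`; do
        echo "  CROSS JOIN foo$i"
    done
    echo ";"
} | psql greg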

Actually, if that works the real test is to find a use for this :-)

--
   Richard Huxton
   Archonet Ltd

Re: MySQL million tables

From
Jim Nasby
Date:
I can't believe y'all are burning cycles on this. :P

On Mar 9, 2006, at 8:04 AM, Christopher Browne wrote:

> I'm jealous.  I've got the very same thing running on some Supposedly
> Pretty Fast Hardware, and it's cruising towards 31 minutes plus a few
> seconds.
> [snip]

--
Jim C. Nasby, Database Architect                decibel@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828

Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"



Re: MySQL million tables

From
"Guido Barosio"
Date:
Well,

    This is a WTF case, but a year ago a request arrived from the Develociraptors to the DBA team.

   They needed a 2-terabyte db [with a particular need]. They benchmarked both MySQL and PostgreSQL, and believe me, it was funny, because the DBA team refused to support the idea and left the funny and wild developmentiraptors on their own.

   The result? A script creating more or less 40,000 tables (oh yeah, like the foo$i one) on a MySQL db, making it almost impossible to browse, but live. It's currently in a beta stage, but frozen for lack of support. (Without the DBAs' support, again.)

   Lovely! But you never know with these things, you neeever know.

Note: I created 250k tables in 63 minutes using the Perl script from a previous post, on my own workstation. (RH3, short on RAM, average CPU, and a crappy used drive bought on ebay.com and shipped from the north to the south.)

g.-

  
On 3/11/06, Jim Nasby <jim@nasby.net> wrote:
> I can't believe y'all are burning cycles on this. :P




--
/"\   ASCII Ribbon Campaign  .
\ / - NO HTML/RTF in e-mail  .
X  - NO Word docs in e-mail .
/ \ -----------------------------------------------------------------

Re: MySQL million tables

From
"Joshua D. Drake"
Date:
Jim Nasby wrote:
> I can't believe y'all are burning cycles on this. :P
You're kidding, right? Have you seen the discussions that happen on this list? ;)

Joshua D. Drake


Re: MySQL million tables

From
Robert Treat
Date:
I can't believe the mysql guys found this to be non-trivial.

http://bitbybit.dk/carsten/blog/?p=83
http://www.flamingspork.com/blog/2006/03/09/a-million-tables/

On Friday 10 March 2006 19:17, Jim Nasby wrote:
> I can't believe y'all are burning cycles on this. :P
> [snip]

--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL

Re: MySQL million tables

From
"Jim C. Nasby"
Date:
On Fri, Mar 10, 2006 at 08:34:45PM -0500, Robert Treat wrote:
> I can't believe the mysql guys found this to be non-trivial.
>
> http://bitbybit.dk/carsten/blog/?p=83
> http://www.flamingspork.com/blog/2006/03/09/a-million-tables/

So did anyone complete the million-table test? Do we have anything to post?
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461

Re: MySQL million tables

From
Chris
Date:
Jim C. Nasby wrote:
> On Fri, Mar 10, 2006 at 08:34:45PM -0500, Robert Treat wrote:
>
>>I can't believe the mysql guys found this to be non-trivial.
>>
>>http://bitbybit.dk/carsten/blog/?p=83
>>http://www.flamingspork.com/blog/2006/03/09/a-million-tables/
>
>
> So did anyone complete the million table test? We have anything to post?

Already done:

http://people.planetpostgresql.org/greg/index.php?/archives/37-The-million-table-challenge.html


--
Postgresql & php tutorials
http://www.designmagick.com/

Re: MySQL million tables

From
ellis@spinics.net (Rick Ellis)
Date:
In article <200603102034.45797.xzilla@users.sourceforge.net>,
Robert Treat <xzilla@users.sourceforge.net> wrote:

>I can't believe the mysql guys found this to be non-trivial.

I can ;)

--
http://yosemitenews.info/