Thread: PostgreSQL limitations question
Hi All
I found PG limitations (http://www.postgresql.org/about/):
- Maximum Rows per Table - Unlimited
- Maximum Table Size - 32 TB
My question is:
how is it possible to *reach* unlimited rows in a table?
I did a test:
1. Create Table:
CREATE TABLE test.limits("RowValue" text) WITH (OIDS=FALSE, FILLFACTOR=100);
2. Fill table (I used pgScript available in pgAdmin);
DECLARE @I;
SET @I = 0;
WHILE @I < 1000
BEGIN
INSERT INTO test.limits ("RowValue") VALUES (NULL);
SET @I = @I + 1;
END
3. do Vacuum full to be sure free space is removed
VACUUM FULL test.limits;
4. I checked table size:
SELECT * FROM pg_size_pretty(pg_relation_size('test.limits'::regclass));
and I realized table size is 32 kB.
I used the pgstattuple extension (http://www.postgresql.org/docs/9.1/static/pgstattuple.html) to check what is going on:
SELECT * FROM pgstattuple('test.limits');
and I got:
table_len          | 32768
tuple_count        | 1000
tuple_len          | 24000
tuple_percent      | 73.24
dead_tuple_count   | 0
dead_tuple_len     | 0
dead_tuple_percent | 0
free_space         | 4608
free_percent       | 14.06
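That tuple_len works out to 24000 / 1000 = 24 bytes per row, which is essentially just the per-row tuple header (roughly 23 bytes plus a null bitmap, rounded up to an 8-byte boundary); the NULL value itself takes no data space at all.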
Did I miss something?
Is there a data type with no storage cost?
I know that "storage requirement for a short string (up to 126 bytes) is 1 byte plus the actual string" (http://www.postgresql.org/docs/9.1/static/datatype-character.html).
Regards,
Bartek
On 07/12/2012 05:01 AM, Bartosz Dmytrak wrote:
> 1. Create Table:
> CREATE TABLE test.limits("RowValue" text) WITH (OIDS=FALSE, FILLFACTOR=100);
>
> 2. Fill table (I used pgScript available in pgAdmin);

I suspect that's a pretty slow way to try to fill your DB up. You're doing individual INSERTs and possibly in individual transactions (unsure, I don't use PgAdmin); it's not going to be fast.

Try COPYing rows in using psql. I'd do it in batches via shell script loop myself. Alternately, you could use the COPY support of the DB drivers in perl or Python to do it.

> 3. do Vacuum full to be sure free space is removed
> VACUUM FULL test.limits;

Which version of Pg are you running? If it's older than 9.0 you're possibly better off using "CLUSTER" instead of "VACUUM FULL".

> 4. I checked table size:
> SELECT * FROM pg_size_pretty(pg_relation_size('test.limits'::regclass));
> and I realized table size is 32 kB.

Use pg_total_relation_size to include TOAST tables too.

--
Craig Ringer
2012/7/12 Craig Ringer <ringerc@ringerc.id.au>
I suspect that's a pretty slow way to try to fill your DB up. You're doing individual INSERTs and possibly in individual transactions (unsure, I don't use PgAdmin); it's not going to be fast.
Try COPYing rows in using psql. I'd do it in batches via shell script loop myself. Alternately, you could use the COPY support of the DB drivers in perl or Python to do it.
This time it doesn't matter - I agree COPY is better; this was only a one-time test.
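For the record, the bulk-load route could look something like this (just a sketch - 'rows.txt' stands in for a data file with one NULL marker, \N, per line):
\copy test.limits ("RowValue") from 'rows.txt'
or, staying in plain SQL, a single set-based statement:
INSERT INTO test.limits ("RowValue")
SELECT NULL::text FROM generate_series(1, 1000000);  -- row count picked arbitrarily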
3. do Vacuum full to be sure free space is removed
VACUUM FULL test.limits;
Which version of Pg are you running? If it's older than 9.0 you're possibly better off using "CLUSTER" instead of "VACUUM FULL".
I am sorry - 9.1.4
Use pg_total_relation_size to include TOAST tables too.
It doesn't matter - the conclusion is: the table is growing. You are right, for other purposes it would be better to check the total size.
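For completeness, the total-size check suggested above would be:
SELECT pg_size_pretty(pg_total_relation_size('test.limits'::regclass));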
Regards,
Bartek
On 07/12/2012 02:16 PM, Bartosz Dmytrak wrote:
It doesn't matter - the conclusion is: the table is growing. You are right, for other purposes it would be better to check the total size.
In that case, I'm not sure I understand what you were actually asking in your initial question.
--
Craig Ringer
On 07/12/2012 12:39 AM, Craig Ringer wrote:
> On 07/12/2012 02:16 PM, Bartosz Dmytrak wrote:
>> It doesn't matter - the conclusion is: the table is growing. You are right, for other purposes it would be better to check the total size.
>
> In that case, I'm not sure I understand what you were actually asking in your initial question.

I understood it to be asking about the conflict between the two statements below:

Maximum Table Size 32 TB
Maximum Rows per Table Unlimited

If a table has a maximum size and rows have size then at some point you will reach a limit on number of rows per table.

--
Adrian Klaver
adrian.klaver@gmail.com
Adrian Klaver <adrian.klaver@gmail.com> writes:
> On 07/12/2012 12:39 AM, Craig Ringer wrote:
>> In that case, I'm not sure I understand what you were actually asking in your initial question.

> I understood it to be asking about the conflict between the two statements below:
> Maximum Table Size 32 TB
> Maximum Rows per Table Unlimited
> If a table has a maximum size and rows have size then at some point you will reach a limit on number of rows per table.

I think the "unlimited" should be read as "you'll hit some other limit first". For example, I trust no one would read that line as implying that we can store more data than will fit on the machine's disks. In the same way, it's not meant to suggest that the number of rows isn't effectively limited by the max table size.

We could perhaps replace "unlimited" by the result of dividing the max table size by the minimum row size. I'm not sure that would be particularly helpful though, since most tables are probably a good deal wider than the minimum row size, and so the effective limit would be quite a bit less.

regards, tom lane
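As a rough back-of-the-envelope figure (not an official number): 32 TB divided by the ~24-byte minimal tuple seen in the pgstattuple output above gives about 1.5 × 10^12 rows, and the real ceiling would be somewhat lower once page headers and line pointers are counted.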
On 07/12/2012 06:44 AM, Tom Lane wrote:
> Adrian Klaver <adrian.klaver@gmail.com> writes:
>> On 07/12/2012 12:39 AM, Craig Ringer wrote:
>>> In that case, I'm not sure I understand what you were actually asking in your initial question.
>
>> I understood it to be asking about the conflict between the two statements below:
>
>> Maximum Table Size 32 TB
>> Maximum Rows per Table Unlimited
>
>> If a table has a maximum size and rows have size then at some point you will reach a limit on number of rows per table.
>
> I think the "unlimited" should be read as "you'll hit some other limit first". For example, I trust no one would read that line as implying that we can store more data than will fit on the machine's disks. In the same way, it's not meant to suggest that the number of rows isn't effectively limited by the max table size.

I would agree, but the OP's question was:

"
My question is:
how is it possible to *reach* unlimited rows in a table?
"

> We could perhaps replace "unlimited" by the result of dividing the max table size by the minimum row size. I'm not sure that would be particularly helpful though, since most tables are probably a good deal wider than the minimum row size, and so the effective limit would be quite a bit less.
>
> regards, tom lane

--
Adrian Klaver
adrian.klaver@gmail.com
On Jul 12, 2012, at 9:44, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> We could perhaps replace "unlimited" by the result of dividing the max table size by the minimum row size. I'm not sure that would be particularly helpful though, since most tables are probably a good deal wider than the minimum row size, and so the effective limit would be quite a bit less.
>
> regards, tom lane

How about saying: "No Fixed Limit - see Table Size"

There is a semantic difference between being limited by the file system (thus internally unlimited) and being limited by an internal constraint (table size). Pointing out the implication that a maximum table size necessarily limits the maximum number of rows stored benefits a very small fraction of the audience, but it doesn't cause any harm to the remainder and doesn't cost much to implement.

You could also provide a range: 20 to millions+, based on the max row size of 1.2 TB and whatever the minimum size would result in.

David J.
2012/7/12 David Johnston <polobo@yahoo.com>
How about saying: "No Fixed Limit - see Table Size"
I am sorry for the delay. My intention was to start a discussion about the unlimited number of rows.
I like this idea: "No Fixed Limit - see Table Size"
Another, maybe only academic, discussion is about the maximum number of indexes per table. The reason is the same: indexes are stored in the pg_class table (relkind = 'i'), so if we agree that the number of rows in a table is limited, then the number of indexes is limited too.
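For instance, counting them is a plain catalog query (every index in the cluster has a pg_class row):
SELECT count(*) FROM pg_class WHERE relkind = 'i';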
There is a fair caveat for the number of columns - "depending on column type".
I think there should be an explanation of what *unlimited* really means.
Thanks for your attention.
Regards,
Bartek
On Fri, Jul 13, 2012 at 4:14 AM, Bartosz Dmytrak <bdmytrak@gmail.com> wrote:
> I think there should be an explanation of what *unlimited* really means.

Unlimited doesn't mean you can have infinite of something, but just that that specific attribute doesn't have its own limit. For instance, if I fly to England, I am allowed to carry as much sheet music with me as I like ("unlimited"), but there are weight limits on my total baggage. In theory, I could calculate exactly how many bars of music I could fit, at the tightest font size I can reasonably use, on the lightest paper, etc, etc, and thus figure out that the airline limits me to X amount of sheet music; but there's no actual limit on sheet music.

Does that help?

Chris Angelico
On Fri, Jul 13, 2012 at 12:05 AM, Chris Angelico <rosuav@gmail.com> wrote:
> On Fri, Jul 13, 2012 at 4:14 AM, Bartosz Dmytrak <bdmytrak@gmail.com> wrote:
>> I think there should be an explanation of what *unlimited* really means.
>
> Unlimited doesn't mean you can have infinite of something, but just that that specific attribute doesn't have its own limit. For instance, if I fly to England, I am allowed to carry as much sheet music with me as I like ("unlimited"), but there are weight limits on my total baggage. In theory, I could calculate exactly how many bars of music I could fit, at the tightest font size I can reasonably use, on the lightest paper, etc, etc, and thus figure out that the airline limits me to X amount of sheet music; but there's no actual limit on sheet music.

Wait, are you suggesting Olive Garden doesn't *actually* offer unlimited breadsticks?
On Sat, Jul 14, 2012 at 2:40 AM, Mike Christensen <mike@kitchenpc.com> wrote:
> Wait, are you suggesting Olive Garden doesn't *actually* offer unlimited breadsticks?

I'm not American, and have only been to Olive Garden once (visited your country and tried to cram way way too much into not nearly enough time), so I don't know the reference. But a non-infinite number of breadsticks calls this to mind: http://www.youtube.com/watch?v=QXLL8qFC5BY

ChrisA
2012/7/13 Chris Angelico <rosuav@gmail.com>
Does that help?
Sure :)
I know what unlimited means, but I suggest changing the docs to be more accurate.
Those "limits" are huge (e.g. the number of indexes is limited by the pg_class table size), but they still exist.
It is like Henry Ford's famous line about color choice:
"Any customer can have a car painted any color that he wants so long as it is black."
The number of indexes is unlimited until it is limited by the pg_class table size (regardless of free disk space).
Regards,
Bartek
On Sun, Jul 15, 2012 at 08:54:53PM +0200, Bartosz Dmytrak wrote:
> 2012/7/13 Chris Angelico <rosuav@gmail.com>
> > Does that help?
>
> Sure :)
> I know what unlimited means, but I suggest changing the docs to be more accurate.
> Those "limits" are huge (e.g. the number of indexes is limited by the pg_class table size), but they still exist.
>
> It is like Henry Ford's famous line about color choice:
> "Any customer can have a car painted any color that he wants so long as it is black."
> The number of indexes is unlimited until it is limited by the pg_class table size (regardless of free disk space).

What is the pg_class table size limit then? Is that really helping anyone?

--
Bruce Momjian <bruce@momjian.us>  http://momjian.us
EnterpriseDB  http://enterprisedb.com
+ It's impossible for everything to be true. +
2012/7/26 Bruce Momjian <bruce@momjian.us>
What is the pg_class table size limit then? Is that really helping anyone?
First of all - thanks for your attention, I really appreciate it.
Is that helping? As has been mentioned before, only a small audience has noticed that fact, so probably not. I think it is a matter of description quality, not of real limitations - they are huge, but... they exist.
Regards,
Bartek
On Thu, Jul 12, 2012 at 8:53 AM, Adrian Klaver <adrian.klaver@gmail.com> wrote:
> On 07/12/2012 06:44 AM, Tom Lane wrote:
>> Adrian Klaver <adrian.klaver@gmail.com> writes:
>>> If a table has a maximum size and rows have size then at some point you will reach a limit on number of rows per table.
>>
>> I think the "unlimited" should be read as "you'll hit some other limit first". For example, I trust no one would read that line as implying that we can store more data than will fit on the machine's disks. In the same way, it's not meant to suggest that the number of rows isn't effectively limited by the max table size.
>
> I would agree, but the OP's question was:
> "
> My question is:
> how is it possible to *reach* unlimited rows in a table?
> "

And then you can have "partitioned" tables: the system considers them almost independent tables (they are dependent only in the sense that reading the parent also reads the children), but to the user they look like one single table. So even saying "limited by table size" is not entirely true from a certain point of view.

Maybe: "limited by other constraints" (or some other wording to that effect).

--
Jaime Casanova
www.2ndQuadrant.com
Professional PostgreSQL: 24x7 support and training
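A minimal sketch of what Jaime describes, using the inheritance-style partitioning available in 9.1 (the child table names here are made up):
CREATE TABLE test.limits_part1 () INHERITS (test.limits);
CREATE TABLE test.limits_part2 () INHERITS (test.limits);
-- Each child is its own relation with its own 32 TB ceiling, but a query on
-- the parent also scans the children:
INSERT INTO test.limits_part1 ("RowValue") VALUES (NULL);
SELECT count(*) FROM test.limits;  -- includes rows stored in the children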