Thread: Max number of rows in a table

Max number of rows in a table

From: ow
> --- Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Actually you can only have 4 billion SQL commands per xid, because the
> CommandId datatype is also just 32 bits.  I've never heard of anyone
> running into that limit, though.
> 

Wouldn't the above put a limit on the number of records one could have in a table?
One has to use pg_dump/pg_restore to upgrade between pgSql releases and/or to
back up data. If one cannot back up data and/or upgrade between pg releases,
then 4B records is the effective limit on the number of records in
pgSql ... or am I missing something?

Thanks








Re: Max number of rows in a table

From: Tom Lane
ow <oneway_111@yahoo.com> writes:
>> --- Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Actually you can only have 4 billion SQL commands per xid, because the
>> CommandId datatype is also just 32 bits.  I've never heard of anyone
>> running into that limit, though.

> Wouldn't the above put a limit on the number of records one could have
> in a table?

No.
        regards, tom lane


Re: Max number of rows in a table

From: ow
--- Tom Lane <tgl@sss.pgh.pa.us> wrote: 
> > Wouldn't the above put a limit on the number of records one could have
> > in a table?
> 
> No.

If I understand correctly, a table that has 4B+ rows cannot be restored after a
dump and that, in turn, may well affect the ability to upgrade to new versions
of pgSql.

This is somewhat similar to saying "yes" to the question "Can I drive this car
at 55 mph?" and then forgetting to mention that the brakes will fail if the car
reaches 55 mph.

Thanks







Re: Max number of rows in a table

From: Stephan Szabo
On Mon, 1 Dec 2003, ow wrote:

> > --- Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > Actually you can only have 4 billion SQL commands per xid, because the
> > CommandId datatype is also just 32 bits.  I've never heard of anyone
> > running into that limit, though.
> >
>
> Wouldn't the above put a limit on the number of records one could have in a table?
> One has to use pg_dump/pg_restore to upgrade between pgSql releases and/or to
> back up data. If one cannot back up data and/or upgrade between pg releases,
> then 4B records is the effective limit on the number of records in
> pgSql ... or am I missing something?

I'd expect copy to be a single command, no matter how many rows were
copied.



Re: Max number of rows in a table

From: Greg Stark
Stephan Szabo <sszabo@megazone.bigpanda.com> writes:

> I'd expect copy to be a single command, no matter how many rows were
> copied.

It might prevent you from using pg_dump --inserts?

-- 
greg



Re: Max number of rows in a table

From: Tom Lane
ow <oneway_111@yahoo.com> writes:
> If I understand correctly, a table that has 4B+ rows cannot be restored after a
> dump and that, in turn, may well affect the ability to upgrade to new versions
> of pgSql.

You don't understand correctly.
        regards, tom lane


Re: Max number of rows in a table

From: ow
--- Tom Lane <tgl@sss.pgh.pa.us> wrote:
> ow <oneway_111@yahoo.com> writes:
> > If I understand correctly, a table that has 4B+ rows cannot be restored
> after
> > the dump and that, in turn, may/will affect the ability to upgrade to new
> > versions of pgSql.
> 
> You don't understand correctly.

I see. Thanks







Re: Max number of rows in a table

From: Christopher Browne
Clinging to sanity, oneway_111@yahoo.com (ow) mumbled into her beard:
> --- Tom Lane <tgl@sss.pgh.pa.us> wrote: 
>> > Wouldn't the above put a limit on the number of records one could have
>> > in a table?
>> 
>> No.
>
> If I understand correctly, a table that has 4B+ rows cannot be restored after a
> dump and that, in turn, may well affect the ability to upgrade to new versions
> of pgSql.
>
> This is somewhat similar to saying "yes" to the question "Can I drive this car
> at 55 mph?" and then forgetting to mention that the brakes will fail if the car
> reaches 55 mph.

No, you are NOT understanding correctly.

Restoring a table from pg_dump generally involves _ONE_ command.

For instance, the following is what pg_dump generates for my table, "stocks."

COPY stocks (symbol, description, exchange) FROM stdin;
AADBX   AADBX   NYSE
AADEX   AADEX   NYSE
AAIEX   AAIEX   NYSE
BTS.A   BTS.A   TSX
CTSTK   TD Canadian Equity      CTE
CASH    CASH    TSX
CTAMER  TD AmeriGrowth RSP      CTE
CTASIA  TD AsiaGrowth RSP       CTE
CTEMER  TD Emerging Markets RSP CTE
CTEURO  TD European Growth RSP  CTE
CTIBND  TD Global RSP Bond      CTE
FDIVX   FDIVX   NYSE
FDRXX   FDRXX   NYSE
FUSEX   FUSEX   NYSE
MOT     MOT     NYSE
NCX     NOVA Chemicals Corporation      TSX
NT      NT      NYSE
PCA     Petro Canada    TSX
RY      Royal Bank of Canada    TSX
TOC     Thomson Corporation     TSX
TRP     TransCanada PipeLines Limited   TSX
WORKVE  Working Ventures        OTHER
CTSPEC  TD SPECIAL EQUITY       CTE
CTUSEQ  TD US EQUITY    CTE
CTMM    TD MONEY MARKET PL      CTE
CTCBOND TD Canadian Bond        CTE
\.

Recovery from this involves the SQL processor using ONE transaction
ID, and ONE SQL statement.  If there were 8 billion rows in the table,
whatever other challenges there might be, it would still use ONE
transaction ID and ONE SQL statement.

What is there about "This involves just one SQL statement" that isn't
making sense?
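
[Editor's note: the point can be seen directly with the `cmin` system column, which records the CommandId that created each row (it is only meaningful when read inside the same transaction that inserted the rows). A minimal sketch, assuming a throwaway table:]

```sql
BEGIN;
CREATE TABLE copy_demo (symbol text);
COPY copy_demo (symbol) FROM stdin;
AADBX
MOT
NT
\.
-- Every row loaded by the COPY shares a single CommandId, no matter
-- how many rows there are -- so only one of the ~4 billion command
-- slots in this transaction is consumed:
SELECT DISTINCT cmin FROM copy_demo;
COMMIT;
```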
-- 
wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org').
http://www.ntlug.org/~cbbrowne/lisp.html
Pagers are cases for holding dead batteries. -Richard Wolff 


Re: Max number of rows in a table

From: Tom Lane
Greg Stark <gsstark@mit.edu> writes:
> Stephan Szabo <sszabo@megazone.bigpanda.com> writes:
>> I'd expect copy to be a single command, no matter how many rows were
>> copied.

> It might prevent you from using pg_dump --inserts?

Not even that, unless you *also* modified the dump output to wrap
BEGIN/END around it.  Otherwise each INSERT is a separate xid.

(Of course you could definitely take a pretty long coffee break while
waiting for a 4-billion-row table to be restored with INSERTs.  Also
I think it would be necessary to run VACUUM partway through to avoid
transaction wraparound issues.  pg_autovacuum could be expected to
take care of that for you, if it were running.  But in practice anyone
sane would use COPY for this, anyway.)
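
[Editor's note: the distinction Tom draws can be sketched as follows. The table and values are illustrative; pg_dump --inserts emits the bare INSERTs, and the BEGIN/COMMIT wrapper is what you would have to add by hand:]

```sql
-- As emitted by pg_dump --inserts: each statement runs as its own
-- transaction, so each gets a fresh xid and CommandId never accumulates:
INSERT INTO stocks (symbol, description, exchange) VALUES ('MOT', 'MOT', 'NYSE');
INSERT INTO stocks (symbol, description, exchange) VALUES ('NT', 'NT', 'NYSE');

-- Hand-wrapped: now all the INSERTs share one xid, and each consumes one
-- of the ~4 billion CommandIds available within that single transaction --
-- this is the only way the 32-bit CommandId limit could come into play:
BEGIN;
INSERT INTO stocks (symbol, description, exchange) VALUES ('MOT', 'MOT', 'NYSE');
INSERT INTO stocks (symbol, description, exchange) VALUES ('NT', 'NT', 'NYSE');
COMMIT;
```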
        regards, tom lane