Thread: RE: [INTERFACES] Postgres Limitations

RE: [INTERFACES] Postgres Limitations

From: "Jackson, DeJuan"
It is currently unclear what will happen when your table reaches 2GB
of storage on most file systems.  I think that >2GB table handling got
broken somehow.
The max tuple (row) size is 8K, including overhead.
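
For a rough sense of scale, a quick back-of-envelope in Python (the ~200-byte
"typical" row here is just an assumption for illustration, not a Postgres
figure):

    TUPLE_MAX = 8 * 1024              # max tuple (row) size, including overhead
    FILE_LIMIT = 2 * 1024 ** 3        # the 2GB file-size ceiling discussed here

    print(FILE_LIMIT // TUPLE_MAX)    # 262144 rows per file if every row hits the 8K cap
    print(FILE_LIMIT // 200)          # ~10.7 million rows per file at ~200 bytes per row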

Hope this helps,
    DEJ

> -----Original Message-----
> From: Jim Carroll [mailto:jim@carroll.com]
> Sent: Tuesday, February 02, 1999 9:10 AM
> To: pgsql-interfaces@postgreSQL.org
> Subject: [INTERFACES] Postgres Limitations
>
>  Could someone point me to a document that lists the limitations of
>  PostgreSQL?  I am specifically interested in limitations on the number of
>  rows that can be present in any one table.
>
>  Thanks
>
> ---
> Jim C., President       | C A R R O L L - N E T, Inc.
> 201-488-1332            | New Jersey's Premier Internet Service Provider
> www.carroll.com         |
>                         | Want to grow your business and at the same
>                         | time, decrease costs?  Ask about the
> www.message-server.com  | Carroll-Net Message Server.
>
>

RE: [INTERFACES] Postgres Limitations

From: Jim Carroll
On Tue, 2 Feb 1999, Jackson, DeJuan wrote:

> Date: Tue, 2 Feb 1999 14:29:53 -0600
> From: Jackson, DeJuan <djackson@cpsgroup.com>
> To: Jim Carroll <jim@carroll.com>, pgsql-interfaces@postgreSQL.org
> Subject: RE: [INTERFACES] Postgres Limitations
>
> It is currently unclear what will happen when your table reaches 2GB
> of storage on most file systems.  I think that >2GB table handling got
> broken somehow.

 I know this is probably a "loaded" question, but do you have any idea what
 might be the cause of this limitation? Are there any FAQs, docs, or source
 code references we could follow up on to see about solving this problem?

 We are looking to create an index for 70 million records. My quick
 calculations show we will have a single table larger than 15GB.
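
 To spell out the arithmetic (the ~230-byte average row is simply what 15GB
 over 70 million rows works out to, not a measured figure), in Python:

    rows = 70 * 1000 * 1000
    table_size = 15 * 1024 ** 3            # the 15GB estimate above
    segment = 2 * 1024 ** 3                # the 2GB per-file limit under discussion

    print(table_size / rows)               # ~230 bytes per row, on average
    print(-(-table_size // segment))       # 8 two-gigabyte files needed (ceiling division)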

---
Jim C., President       | C A R R O L L - N E T, Inc.
201-488-1332            | New Jersey's Premier Internet Service Provider
www.carroll.com         |
                        | Want to grow your business and at the same
                        | time, decrease costs?  Ask about the
www.message-server.com  | Carroll-Net Message Server.


Re: [INTERFACES] Postgres Limitations

From: Tom Lane
Jim Carroll <jim@carroll.com> writes:
>> It is currently unclear what will happen when your table reaches 2GB
>> of storage on most file systems.  I think that >2GB table handling got
>> broken somehow.

>  I know this is probably a "loaded" question, but do you have any idea what
>  might be the cause of this limitation?

Postgres does have logic for coping with tables > 2GB by splitting them
into multiple Unix files.  Peter Mount recently reported that this
feature appears to be broken in the current sources (cf. the hackers mailing
list archive for 25/Jan/99).  I don't think anyone has followed up on
the issue yet.  (I dunno about the other developers, but I don't have a
few GB of free space to spare so I can't test it...)  You could make a
useful contribution by either determining that the feature does work, or
fixing it if it's busted.  Probably wouldn't be a very complex fix, but
I've never looked at that part of the code.
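
(To illustrate the idea only: a sketch in Python of the block-to-segment
arithmetic, assuming the 2GB split point discussed in this thread; it is not
the actual storage-manager code.)

    BLCKSZ = 8192                          # Postgres disk block size
    SEGMENT_BYTES = 2 * 1024 ** 3          # assumed 2GB split point
    BLOCKS_PER_SEG = SEGMENT_BYTES // BLCKSZ

    def block_location(table_file, block_number):
        """Map a logical block number to (segment file name, byte offset).

        Segment 0 is the bare table file; later segments get a numeric
        suffix, which is how the extra files show up on disk.
        """
        seg = block_number // BLOCKS_PER_SEG
        offset = (block_number % BLOCKS_PER_SEG) * BLCKSZ
        name = table_file if seg == 0 else "%s.%d" % (table_file, seg)
        return name, offset

    # The first block past 2GB should land at offset 0 of "bigtable.1":
    print(block_location("bigtable", BLOCKS_PER_SEG))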

If your total database will exceed the space available on a single
filesystem on your platform, you will have to play some games with
symbolic links in order to spread the table files across multiple
filesystems.  I don't know of any gotchas in doing that, but it's
kind of a pain for the DB admin to have to do it by hand.
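
Something along these lines, for instance (the paths here are invented, and
the postmaster must be shut down before moving any data files by hand):

    import os
    import shutil

    # Hypothetical paths: a segment file on a filesystem that is filling up,
    # and a directory on a second disk with room to spare.
    old = "/usr/local/pgsql/data/base/mydb/bigtable.1"
    new = "/disk2/pgsql/bigtable.1"

    shutil.move(old, new)   # copies across filesystems, then removes the original
    os.symlink(new, old)    # leave a symlink so Postgres still finds the segment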

            regards, tom lane

Re: [INTERFACES] Postgres Limitations

From: Peter T Mount
On Wed, 3 Feb 1999, Tom Lane wrote:

> Jim Carroll <jim@carroll.com> writes:
> >> It is currently unclear what will happen when your table reaches 2GB
> >> of storage on most file systems.  I think that >2GB table handling got
> >> broken somehow.
>
> >  I know this is probably a "loaded" question, but do you have any idea what
> >  might be the cause of this limitation?
>
> Postgres does have logic for coping with tables > 2GB by splitting them
> into multiple Unix files.  Peter Mount recently reported that this
> feature appears to be broken in the current sources (cf. the hackers mailing
> list archive for 25/Jan/99).  I don't think anyone has followed up on
> the issue yet.  (I dunno about the other developers, but I don't have a
> few GB of free space to spare so I can't test it...)  You could make a
> useful contribution by either determining that the feature does work, or
> fixing it if it's busted.  Probably wouldn't be a very complex fix, but
> I've never looked at that part of the code.

I tested it, as I had a few GB free, and although it split the file at
2GB, the table wouldn't extend any further.

I started browsing the source the other day, and at first glance it looks OK.
I have a feeling it's something simple, and I'm planning to try it again
this weekend.

The problem I have is that it takes 4 hours for a table to reach 2GB on my
system, so it's a slow process :-(
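
(In case it helps anyone trying to reproduce this: writing a flat file of
filler rows and loading it with a single COPY is usually much quicker than
looping over INSERTs. A rough Python sketch, with a made-up two-column test
table; names and sizes are invented:)

    # Assumes a test table like:  CREATE TABLE filler (id int, pad text);
    # Load the result afterwards: COPY filler FROM '/tmp/filler.dat';
    PAD = "x" * 1000                            # roughly 1KB of padding per row
    TARGET = 2 * 1024 ** 3 + 100 * 1024 ** 2    # aim a little past the 2GB split

    with open("/tmp/filler.dat", "w") as f:
        written = 0
        row_id = 0
        while written < TARGET:
            line = "%d\t%s\n" % (row_id, PAD)
            f.write(line)
            written += len(line)
            row_id += 1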

Peter

--
       Peter T Mount peter@retep.org.uk
      Main Homepage: http://www.retep.org.uk
PostgreSQL JDBC Faq: http://www.retep.org.uk/postgres
 Java PDF Generator: http://www.retep.org.uk/pdf