Thread: 8Ko limitation

8Ko limitation

From: "Xavier ZIMMERMANN"

Hi,

I am thinking about using PostgreSQL to manage large geographic databases
connected to a GIS.
I would really appreciate it if someone could answer these 3 questions:
   what about performance with PostgreSQL and large databases,
   the object size limitation (8192 bytes) is really not acceptable for this
   purpose. Is there a way or any hack to get around it,
   is the geographic index working well.

Thanks.

Xavier.




Re: [HACKERS] 8Ko limitation

From: Karel Zak

>    what about performance with PostgreSQL and large databases,
>    the object size limitation (8192 bytes) is really not acceptable for this

 Now you can change this limit in config.h; the possible range is
8 KB - 32 KB.
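
 For illustration, this is roughly what the relevant define in config.h
looks like -- a sketch only, since the exact comment and the file's place
in the source tree (src/include, if memory serves) vary between releases,
so check your own copy:

    /*
     * Size of a disk block.  It also caps the size of a single tuple,
     * which is where the 8192-byte limit on rows comes from.  Raising
     * it is allowed, but this release does not support values above
     * 32768.
     */
    #define BLCKSZ  8192

 After changing BLCKSZ you have to recompile and, as far as I know, run
initdb again, because existing data files are laid out with the old block
size.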

 In the new 7.1 version this limit will be gone for good (see the TOAST project).

 And what is a "large database"? 1, 5 .. 10GB? If so, PostgreSQL is (IMHO)
a good choice.

                    Karel


Re: Re: [HACKERS] 8Ko limitation

From: Stephane Bortzmeyer

On Thursday 20 July 2000, at 10 h 0, the keyboard of Karel Zak
<zakkr@zf.jcu.cz> wrote:

>  And what is a "large database"? 1, 5 .. 10GB? If so, PostgreSQL is (IMHO)
> a good choice.

Even on Linux? I'm studying a database project where the raw data is 10 to 20
Gb (it will be in several tables in the same database). Linux has a limit of 2
Gb for a file (even on 64-bits machine, if I'm correct). A colleague told me
to use NetBSD instead, because PostgreSQL on a Linux machine cannot host more
than 2 Gb per database. Any practical experience? (I'm not interested in "It
should work".)



Re: Re: [HACKERS] 8Ko limitation

From: Jules Bean

On Thu, Jul 20, 2000 at 10:35:41AM +0200, Stephane Bortzmeyer wrote:
> On Thursday 20 July 2000, at 10 h 0, the keyboard of Karel Zak
> <zakkr@zf.jcu.cz> wrote:
>
> >  And what is a "large database"? 1, 5 .. 10GB? If so, PostgreSQL is (IMHO)
> > a good choice.
>
> Even on Linux? I'm studying a database project where the raw data is 10 to 20
> Gb (it will be in several tables in the same database). Linux has a limit of 2
> Gb for a file (even on 64-bits machine, if I'm correct). A colleague told me
> to use NetBSD instead, because PostgreSQL on a Linux machine cannot host more
> than 2 Gb per database. Any practical experience? (I'm not interested in "It
> should work".)

Postgres splits large tables into multiple files.

Experience suggests it tends to split at around 1.1G (at least, that's
what it has done on my last project).
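
If it helps, the split size is a compile-time constant too.  I believe the
macro is called RELSEG_SIZE and sits next to BLCKSZ in config.h -- take the
exact name and expression below as a guess to verify against your sources:

    /*
     * Maximum number of blocks stored in any one table file ("segment").
     * When a table grows past this many blocks, a new file named
     * <relation>.1, then <relation>.2, ... is started, so no single
     * file ever reaches the 32-bit 2 GB limit.
     */
    #define RELSEG_SIZE  (0x40000000 / BLCKSZ)  /* 1 GB worth of blocks */

With the default 8 KB BLCKSZ that is 131072 blocks, i.e. 2^30 bytes per
segment -- about 1.07 * 10^9 bytes, which is the "around 1.1G" I see in
directory listings.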

FWIW, the 2 GB limit doesn't exist on 64-bit Linux, AFAIK (at least, not
with a 64-bit-aware libc; I can't remember whether the patches made it into
the version we use in Debian).

Jules

--
Jules Bean                          |        Any sufficiently advanced
jules@debian.org                    |  technology is indistinguishable
jules@jellybean.co.uk               |               from a perl script

Re: Re: [HACKERS] 8Ko limitation

From: Karel Zak

On Thu, 20 Jul 2000, Stephane Bortzmeyer wrote:

> On Thursday 20 July 2000, at 10 h 0, the keyboard of Karel Zak
> <zakkr@zf.jcu.cz> wrote:
>
> >  And what is a "large database"? 1, 5 .. 10GB? If so, PostgreSQL is (IMHO)
> > a good choice.
>
> Even on Linux? I'm studying a database project where the raw data is 10 to 20
> Gb (it will be in several tables in the same database). Linux has a limit of 2
> Gb for a file (even on 64-bits machine, if I'm correct). A colleague told me
> to use NetBSD instead, because PostgreSQL on a Linux machine cannot host more
> than 2 Gb per database. Any practical experience? (I'm not interested in "It
> should work".)

I must say it again: "PostgreSQL is a good choice" :-)

Postgres splits DB files into chunks, so the 2 GB limit does not exist here...

                        Karel


Re: Re: [HACKERS] 8Ko limitation

From: Tom Lane

Jules Bean <jules@jellybean.co.uk> writes:
>> A colleague told me to use NetBSD instead, because PostgreSQL on a
>> Linux machine cannot host more than 2 Gb per database. Any practical
>> experience? (I'm not interested in "It should work".)

> Postgres splits large tables into multiple files.

Segmenting into multiple files used to have some bugs, but that was a
few versions back --- I think your colleague's experience is obsolete.
There are lots of people using multi-gig tables now.

It's presently still painful to manage a database that spans multiple
disks, however.  (You can do it if you're willing to move files around
and establish symlinks by hand ... but it's painful.)  There are plans
to make this better, but for now you might want to say that the
practical limit is the size of disk you can buy.  Alternatively, if
your OS can make logical filesystems that span multiple disks, you
can get around the problem that way.

            regards, tom lane

Re: [HACKERS] 8Ko limitation

From: Tom Lane

"Justin Hickey" <jhickey@impact1.hpcc.nectec.or.th> writes:
>  Will the geometric data types be TOASTable for 7.1?

Probably ... if I get around to it ... or someone else does
(yes, that's a hint).

            regards, tom lane

Re: Re: [HACKERS] 8Ko limitation

From: Brook Milligan

   Even on Linux? I'm studying a database project where the raw data is 10 to 20
   Gb (it will be in several tables in the same database). Linux has a limit of 2
   Gb for a file (even on 64-bits machine, if I'm correct). A colleague told me
   to use NetBSD instead, because PostgreSQL on a Linux machine cannot host more
   than 2 Gb per database. Any practical experience? (I'm not interested in "It
   should work".)

PostgreSQL and NetBSD work fine together.  NetBSD has not had a 2 GB
file limit for _many_ years, and it has RAIDframe for configuring huge
disks from many small ones (as well as for normal RAID setups).

Cheers,
Brook



Re: Re: [HACKERS] 8Ko limitation

From: Erich

> Even on Linux? I'm studying a database project where the raw data is 10 to 20
> Gb (it will be in several tables in the same database). Linux has a limit of 2
> Gb for a file (even on 64-bits machine, if I'm correct). A colleague told me

What?

On my RedHat6.2 system:

/dev/md0              14111856    257828  13137168   2% /raid

> to use NetBSD instead, because PostgreSQL on a Linux machine cannot host more
> than 2 Gb per database. Any practical experience? (I'm not interested in "It
> should work".)

For a heavy-duty server, I would probably pick OpenBSD over Linux, but
both will work fine, and both can have filesystems far larger than
2gb.

e

Re: Re: [HACKERS] 8Ko limitation

From: Stephane Bortzmeyer

On Thursday 20 July 2000, at 14 h 5, the keyboard of Erich <hh@cyberpass.net>
wrote:

> > Linux has a limit of 2
> > Gb for a file (even on 64-bits machine, if I'm correct).
...
> What?
>
> On my RedHat6.2 system:
>
> /dev/md0              14111856    257828  13137168   2% /raid
...
> and both can have filesystems far larger than 2gb.

Read the message before replying: I wrote FILE and not FILESYSTEM.



Re: [HACKERS] 8Ko limitation

From: Hannu Krosing

Xavier ZIMMERMANN wrote:
>
> Hi,
>
> I am thinking about using PostgreSQL to manage large geographic databases
> connected to a GIS.
...
>    is the geographic index working well.

AFAIK r-trees are used for planar geometry; perhaps there is something
in contrib for geographic (spherical) coordinates.

--------
Hannu