Thread: Re: [GENERAL] Re: [PATCHES] [PATCH] Contrib C source for casting MONEY to INT[248] and FLOAT[48]

Bruce Momjian:
   I am a beginner. My question is: does PgSQL support full entity
integrity and referential integrity? For example, does it support
restricted delete, nullifies-delete, default-delete, and so on? I read
your book but could not find the details. Where can I find them?


>---------------------------(end of broadcast)---------------------------
>TIP 2: you can get off all lists at once with the unregister command
>    (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)

               lilixin@cqu.edu.cn

               With regards,
            李立新 (Li Lixin)     lilixin@cqu.edu.cn


I'm guessing you are asking about support for referential integrity constraints. It is covered in Bruce's book at
http://www.ca.postgresql.org/docs/aw_pgsql_book/node131.html (ON DELETE NO ACTION/SET NULL/SET DEFAULT).
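Concretely, the three delete behaviors you name correspond to the ON DELETE clause of a foreign key constraint. A sketch (the table and column names here are made up for illustration):

```sql
-- Hypothetical parent/child tables illustrating the ON DELETE actions.
CREATE TABLE departments (
    dept_id   integer PRIMARY KEY,
    name      text
);

CREATE TABLE employees (
    emp_id    integer PRIMARY KEY,
    dept_id   integer DEFAULT 0
        REFERENCES departments (dept_id)
        ON DELETE SET DEFAULT     -- "default-delete": reset the FK to its default
);

-- The other actions, for comparison:
--   ON DELETE NO ACTION   -- "restricted delete": reject the DELETE
--   ON DELETE SET NULL    -- "nullifies-delete": set the FK to NULL
--   ON DELETE CASCADE     -- delete the referencing rows as well
```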

cheers,
thalis


On Wed, 20 Jun 2001, 李立新 (Li Lixin) wrote:

> Bruce Momjian:
>    I am a beginner. My question is: does PgSQL support full entity
> integrity and referential integrity? For example, does it support
> restricted delete, nullifies-delete, default-delete, and so on? I read
> your book but could not find the details. Where can I find them?


2 gig file size limit

From
Naomi Walker
Date:
If PostgreSQL is run on a system that has a file size limit (2 gig?), where
might we hit the limit?
--
Naomi Walker
Chief Information Officer
Eldorado Computing, Inc.
602-604-3100  ext 242


Re: 2 gig file size limit

From
Larry Rosenman
Date:
* Naomi Walker <nwalker@eldocomp.com> [010706 17:57]:
> If PostgreSQL is run on a system that has a file size limit (2 gig?), where
> might we hit the limit?
PostgreSQL is smart: it splits its table files into segments of ~1GB each,
so the limit is transparent to you.

You shouldn't have to worry about it.
LER

> --
> Naomi Walker
> Chief Information Officer
> Eldorado Computing, Inc.
> 602-604-3100  ext 242
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
>

--
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 972-414-9812                 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749

Re: 2 gig file size limit

From
Lamar Owen
Date:
On Friday 06 July 2001 18:51, Naomi Walker wrote:
> If PostgreSQL is run on a system that has a file size limit (2 gig?), where
> might we hit the limit?

Since PostgreSQL automatically segments its internal data files to get around
such limits, the only place you will hit this limit is when making
backups using pg_dump or pg_dumpall.  You may need to pipe the output of
those commands through a file-splitting utility, and then pipe the pieces
through a reassembly utility to restore.
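The split-and-reassemble round trip Lamar describes can be sketched as follows. The database name, chunk size, and filenames are made-up placeholders, and ordinary generated data stands in for a real dump so the sketch is self-contained; in real use the first command would be `pg_dump mydb | split -b 1000m - mydb.dump.part.`:

```shell
# Stand-in for "pg_dump mydb > mydb.dump": generate some ordinary data.
printf 'line %d\n' $(seq 1 1000) > mydb.dump

# Break the dump into fixed-size chunks (2048 bytes here; ~1GB in real use).
split -b 2048 mydb.dump mydb.dump.part.

# Reassemble in order (the shell glob sorts the split suffixes) and verify.
cat mydb.dump.part.* > mydb.dump.restored
cmp mydb.dump mydb.dump.restored && echo OK
```

To restore for real, the reassembled file would be fed to psql instead of compared.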
--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

Re: Backups WAS: 2 gig file size limit

From
Joseph Shraibman
Date:
Lamar Owen wrote:
>
> On Friday 06 July 2001 18:51, Naomi Walker wrote:
> > If PostgreSQL is run on a system that has a file size limit (2 gig?), where
> > might we hit the limit?
>
> Since PostgreSQL automatically segments its internal data files to get around
> such limits, the only place you will hit this limit will be when making
> backups using pg_dump or pg_dumpall.  You may need to pipe the output of

Speaking of which.

Doing a dumpall for a backup takes a long time, and a restore from
the dump files doesn't leave the database in its original state.  Could
a command be added that locks all the files, quickly tars them up, and then
releases the lock?

--
Joseph Shraibman
jks@selectacast.net
Increase signal to noise ratio.  http://www.targabot.com

Re: [GENERAL] 2 gig file size limit

From
"Neil Conway"
Date:
(This question was answered several days ago on this list; please check
the list archives before posting. I believe it's also in the FAQ.)

> If PostgreSQL is run on a system that has a file size limit (2
> gig?), where might we hit the limit?

Postgres will never internally use files (e.g. for tables, indexes,
etc) larger than 1GB -- at that point, the file is split.

However, you might run into problems when you export the data from Pg
to another source, such as if you pg_dump the contents of a database >
2GB. In that case, filter pg_dump through gzip or bzip2 to reduce the
size of the dump. If that's still not enough, you can dump individual
tables (with -t) or use 'split' to divide the dump into several files.
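The compress-then-split pipeline Neil suggests can be sketched like this. Again the database name, sizes, and filenames are placeholders, and generated data stands in for a real dump; the real pipeline would be `pg_dump mydb | gzip | split -b 1900m - mydb.sql.gz.`:

```shell
# Stand-in data keeps the sketch self-contained and testable.
printf 'sample dump data %d\n' $(seq 1 500) > mydb.sql

# Compress, then split the compressed stream into chunks.
gzip -c mydb.sql | split -b 2048 - mydb.sql.gz.

# Restore path: reassemble the chunks, decompress, and verify;
# in real use the decompressed stream would be piped into psql.
cat mydb.sql.gz.* | gunzip > mydb.sql.restored
cmp mydb.sql mydb.sql.restored && echo OK
```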

Cheers,

Neil