Thread: Re: Re: [PATCHES] [PATCH] Contrib C source for casting MONEY to INT[248] and FLOAT[48]

Bruce Momjian:
   I am a beginner. My question is: does PostgreSQL support full entity integrity
and referential integrity? For example, does it support "restricted delete, nullify-delete, default-delete", and so on? I read
your book but could not find the details. Where can I find them?


>---------------------------(end of broadcast)---------------------------
>TIP 2: you can get off all lists at once with the unregister command
>    (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)

               lilixin@cqu.edu.cn

                    Regards,
            Li Lixin     lilixin@cqu.edu.cn


Re: Re: [PATCHES] [PATCH] Contrib C source for casting MONEY to INT[248] and FLOAT[48]

From
"Thalis A. Kalfigopoulos"
Date:
I'm guessing you are asking about support for referential integrity constraints. It's covered in Bruce's book at
http://www.ca.postgresql.org/docs/aw_pgsql_book/node131.html (ON DELETE NO ACTION/SET NULL/SET DEFAULT)

cheers,
thalis


On Wed, 20 Jun 2001, Li Lixin wrote:

> Bruce Momjian:
>    I am a beginner. My question is: does PostgreSQL support full entity integrity
> and referential integrity? For example, does it support "restricted delete, nullify-delete, default-delete", and so on? I read
> your book but could not find the details. Where can I find them?
>
>
> >---------------------------(end of broadcast)---------------------------
> >TIP 2: you can get off all lists at once with the unregister command
> >    (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)
>
>                lilixin@cqu.edu.cn
>
>                     Regards,
>             Li Lixin     lilixin@cqu.edu.cn
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org
>


Re: [HACKERS] 2 gig file size limit

From
Larry Rosenman
Date:
* Naomi Walker <nwalker@eldocomp.com> [010706 17:57]:
> If PostgreSQL is run on a system that has a file size limit (2 gig?), where
> might we hit the limit?
PostgreSQL is smart: it breaks table files into segments of about 1GB each,
so this is transparent to you.

You shouldn't have to worry about it.
LER

> --
> Naomi Walker
> Chief Information Officer
> Eldorado Computing, Inc.
> 602-604-3100  ext 242
>
>
> ---------------------------(end of broadcast)---------------------------
> TIP 4: Don't 'kill -9' the postmaster
>

--
Larry Rosenman                     http://www.lerctr.org/~ler
Phone: +1 972-414-9812                 E-Mail: ler@lerctr.org
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749

2 gig file size limit

From
Naomi Walker
Date:
If PostgreSQL is run on a system that has a file size limit (2 gig?), where
might we hit the limit?
--
Naomi Walker
Chief Information Officer
Eldorado Computing, Inc.
602-604-3100  ext 242


Re: 2 gig file size limit

From
"Neil Conway"
Date:
(This question was answered several days ago on this list; please check
the list archives before posting. I believe it's also in the FAQ.)

> If PostgreSQL is run on a system that has a file size limit (2
> gig?), where might we hit the limit?

Postgres will never internally use files (e.g. for tables, indexes,
etc) larger than 1GB -- at that point, the file is split.

However, you might run into problems when you export the data from Pg
to another source, such as if you pg_dump the contents of a database >
2GB. In that case, filter pg_dump through gzip or bzip2 to reduce the
size of the dump. If that's still not enough, you can dump individual
tables (with -t) or use 'split' to divide the dump into several files.
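A minimal sketch of that pipeline (the database name `mydb` is a placeholder, and a generated file stands in for a real dump so the round trip can be checked):

```shell
# A real dump would come from:  pg_dump mydb | gzip > mydb.sql.gz
# Here a generated file stands in for pg_dump output (placeholder).
seq 1 100000 > mydb.sql

# Compress, then split the archive into fixed-size chunks
# (100k chunks here; something like 1024m suits a real >2GB dump).
gzip -c mydb.sql > mydb.sql.gz
split -b 100k mydb.sql.gz mydb.sql.gz.

# Restore path: recombine the chunks in order, decompress, and feed
# the result to psql (here we just verify the round trip):
#   cat mydb.sql.gz.* | gunzip | psql mydb
cat mydb.sql.gz.* | gunzip > restored.sql
cmp mydb.sql restored.sql && echo "round-trip OK"
```

The shell glob expands the chunk names in lexical order (`.aa`, `.ab`, ...), which is exactly the order `split` wrote them, so plain `cat` reassembles the archive correctly.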

Cheers,

Neil


Re: 2 gig file size limit

From
Bruce Momjian
Date:
> (This question was answered several days ago on this list; please check
> the list archives before posting. I believe it's also in the FAQ.)
>
> > If PostgreSQL is run on a system that has a file size limit (2
> > gig?), where might we hit the limit?
>
> Postgres will never internally use files (e.g. for tables, indexes,
> etc) larger than 1GB -- at that point, the file is split.
>
> However, you might run into problems when you export the data from Pg
> to another source, such as if you pg_dump the contents of a database >
> 2GB. In that case, filter pg_dump through gzip or bzip2 to reduce the
> size of the dump. If that's still not enough, you can dump individual
> tables (with -t) or use 'split' to divide the dump into several files.

I just added the second part of this sentence to the FAQ to try to make
it more visible:

    The maximum table size of 16TB does not require large file
    support from the operating system. Large tables are stored as
    multiple 1GB files so file system size limits are not important.


--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026

Re: 2 gig file size limit

From
markMLl.pgsql-general@telemetry.co.uk
Date:
Can a single database be split over multiple filesystems, or does the
filesystem size under e.g. Linux (whatever it is these days) constrain
the database size?

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or
colleagues]

Re: 2 gig file size limit

From
markMLl.pgsql-general@telemetry.co.uk
Date:
Ian Willis wrote:
>
> Postgresql transparently breaks the db into 1G chunks.

Yes, but presumably these are still in the directory tree that was
created by initdb, i.e. normally on a single filesystem.

> The main concern is during dumps. A 10G db can't be dumped if the
> filesystem has a 2G limit.

Which is why somebody suggested piping into tar or whatever.

> Linux no longer has a filesystem file size limit (or at least not one
> that you'll hit easily)

I'm not concerned with "easily". Telling one of our customers that we
chose a particular server because they won't easily hit limits is a
non-starter.

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or
colleagues]

Re: 2 gig file size limit

From
Martijn van Oosterhout
Date:
On Wed, Jul 11, 2001 at 12:06:05PM +0000, markMLl.pgsql-general@telemetry.co.uk wrote:
> > Linux no longer has a filesystem file size limit (or at least not one
> > that you'll hit easily)
>
> I'm not concerned with "easily". Telling one of our customers that we
> chose a particular server because they won't easily hit limits is a
> non-starter.

Many people would have great difficulty hitting 4 terabytes.

What's the limit on NT?
--
Martijn van Oosterhout <kleptog@svana.org>
http://svana.org/kleptog/
> It would be nice if someone came up with a certification system that
> actually separated those who can barely regurgitate what they crammed over
> the last few weeks from those who command secret ninja networking powers.

JDBC and stored procedures

From
Tony Grant
Date:
Hello,

I am trying to use a stored procedure via JDBC. The objective is to be
able to get data from more than one table. My procedure is a simple "get
country name from table countries where country code = $1", copied from
Bruce's book.

Ultradev is giving me "Error calling GetProcedures: An unidentified
error has occurred"

Just thought I would ask here first if I am up against a brick wall?

Cheers

Tony Grant

--
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html


Re: [JDBC] JDBC and stored procedures

From
Dave Cramer
Date:
Tony,

The GetProcedures function in the driver does not work.
You should be able to do a simple SELECT of the stored proc, however.

Dave

On July 11, 2001 09:06 am, Tony Grant wrote:
> Hello,
>
> I am trying to use a stored procedure via JDBC. The objective is to be
> able to get data from more than one table. My procedure is a simple "get
> country name from table countries where country code = $1", copied from
> Bruce's book.
>
> Ultradev is giving me "Error calling GetProcedures: An unidentified
> error has occurred"
>
> Just thought I would ask here first if I am up against a brick wall?
>
> Cheers
>
> Tony Grant
>
> --
> RedHat Linux on Sony Vaio C1XD/S
> http://www.animaproductions.com/linux2.html
> Macromedia UltraDev with PostgreSQL
> http://www.animaproductions.com/ultra.html
>
>


Re: [JDBC] JDBC and stored procedures

From
Tony Grant
Date:
On 11 Jul 2001 10:20:29 -0400, Dave Cramer wrote:

> The GetProcedures function in the driver does not work.

OK. I bet it is on the todo list =:-D

> You should be able to do a simple SELECT of the stored proc, however

Yes! thank you very much!!!

SELECT getcountryname(director.country)

did the trick, where getcountryname is the function (or stored procedure).

Cheers

Tony

--
RedHat Linux on Sony Vaio C1XD/S
http://www.animaproductions.com/linux2.html
Macromedia UltraDev with PostgreSQL
http://www.animaproductions.com/ultra.html


Re: 2 gig file size limit

From
markMLl.pgsql-general@telemetry.co.uk
Date:
Martijn van Oosterhout wrote:

> What's the limit on NT?

I'm told 2^64 bytes. Frankly, I'd be surprised if MS has tested it :-)

--
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or
colleagues]

Re: Re: Backups WAS: 2 gig file size limit

From
Thomas Lockhart
Date:
> I mentioned this on general a while ago.

I'm not usually there/here, but subscribed recently to avoid annoying
bounce messages from replies to messages cross posted to -hackers. I may
not stay long, since the volume is hard to keep up with.

> I had the problem when I dumped my 7.0.3 db to upgrade to 7.1.  I had to
> modify the dump because there were some :60 second values in there.  It was
> obvious from the code in backend/utils/adt/datetime that it was using
> sprintf to do the formatting, and sprintf was taking the float that
> represented the seconds and rounding it.
>
>  select '2001-07-10 15:39:59.999'::timestamp;
>          ?column?
> ---------------------------
>  2001-07-10 15:39:60.00-04
> (1 row)

Ah, right. I remember that now. Will continue to look at it...

                   - Thomas
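The rounding described above can be reproduced outside the backend: C-style `%.2f` formatting rounds 59.999 up to 60.00, which is exactly how a `:60` seconds field can appear in formatted timestamp output.

```shell
# %.2f rounds 59.999 up to 60.00 -- the same rounding a sprintf-based
# timestamp formatter applies to a fractional seconds value.
printf '%.2f\n' 59.999
```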