Re: Inefficient escape codes. - Mailing list pgsql-performance

From Rodrigo Madera
Subject Re: Inefficient escape codes.
Msg-id 3cf983d0510250903l6eb5b3dfsda46190f3e7fd289@mail.gmail.com
In response to Inefficient escape codes.  (Rodrigo Madera <rodrigo.madera@gmail.com>)
List pgsql-performance
Ok, thanks for the limits info, but I already have that from the manual. Thanks.

But what I really want to know is this:

1) All large objects of all tables inside one DATABASE are kept in only one table. True or false?

Thanks =o)
Rodrigo

On 10/25/05, Nörder-Tuitje, Marcus <noerder-tuitje@technology.de> wrote:
oh, btw, no harm meant, but:
 
having 5000 tables only to gain access via city name is a major design flaw.
 
you might consider putting it all into one table, working with a composite index over (city, loc_text, blobfield) and creating a partitioned index over city.
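A minimal sketch of that single-table layout (the table name `photos` and the index name are illustrative; the columns are taken loosely from the schema quoted below):

```sql
-- One table replaces the 5000 per-city tables; the city becomes a column.
CREATE TABLE photos (
    city     text NOT NULL,
    location text,
    lo_id    oid    -- large object identifier, as in the original schema
);

-- A composite index lets lookups by city (optionally narrowed by location)
-- use an index scan instead of a separate table per city.
CREATE INDEX photos_city_idx ON photos (city, location);

-- Queries then filter by city instead of choosing a table:
--   SELECT lo_id FROM photos WHERE city = 'City1';
```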
 
best regards
-----Original Message-----
From: pgsql-performance-owner@postgresql.org [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Rodrigo Madera
Sent: Monday, October 24, 2005 21:12
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Inefficient escape codes.

Now this interests me a lot.

Please clarify this:

I have 5000 tables, one for each city:

City1_Photos, City2_Photos, ... City5000_Photos.

Each of these tables is: CREATE TABLE CityN_Photos (location text, lo_id largeobecttypeiforgot)

So, what's the limit for these large objects? I heard I could only have 4 billion records for the whole database (not per table). Is this true? If it isn't, would Postgres manage to create all the large objects I ask it to?

Also, this would be a performance penalty, wouldn't it?

Much thanks for the knowledge shared,
Rodrigo



