Thread: Having problems with a 25 million row table on 8.1.3

Having problems with a 25 million row table on 8.1.3

From: Tony Caduto
Hi,
I have a client who has a 25 million row table that is used to keep
track of financial security info.
So far it has worked great, but today someone wanted to get all the tax
codes (there are lots of dupes) from the table.
So we tried this:

select DISTINCT tax_code from warehouse.sec_trans

We let this run for 1/2 hour or so and canceled it.

Then I tried select DISTINCT ON (tax_code) tax_code from warehouse.sec_trans

same deal, had to cancel it.

The server has a mirrored RAID setup on two drives (yes, I know this is
not a good setup, but it's what they have) and 2GB of RAM.

I have the kernel's shared memory size (Linux, CentOS 4.3) set to:
 kernel.shmmax = 262144000

Here is the postgresql.conf entries for memory that have been changed:

# - Memory -

shared_buffers = 15000                  # min 16 or max_connections*2, 8KB each
#temp_buffers = 1000                    # min 100, 8KB each
#max_prepared_transactions = 5          # can be 0 or more
# note: increasing max_prepared_transactions costs ~600 bytes of shared memory
# per transaction slot, plus lock space (see max_locks_per_transaction).
work_mem = 10240                        # min 64, size in KB
maintenance_work_mem = 32768            # min 1024, size in KB
#max_stack_depth = 2048                 # min 100, size in KB

Anyone have any ideas on how to get all the unique tax codes from this
table?


Thanks in advance :-)



Tony

Re: Having problems with a 25 million row table on 8.1.3

From: Martijn van Oosterhout
On Tue, Apr 25, 2006 at 03:10:32PM -0500, Tony Caduto wrote:
> Hi,
> I have a client who has a 25 million row table that is used to keep
> track of financial security info.
> So far it has worked great, but today someone wanted to get all the tax
> codes (there are lots of dupes) from the table.
> So we tried this:
>
> select DISTINCT tax_code from warehouse.sec_trans
> We let this run for 1/2 hour or so and canceled it.

What plan did it want to use (EXPLAIN query)? What version of
PostgreSQL? How many results are you expecting?
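
For what it's worth, plain EXPLAIN (without ANALYZE) only shows the chosen
plan and returns immediately, so it is safe to run against the big table:

EXPLAIN SELECT DISTINCT tax_code FROM warehouse.sec_trans;
-- On 8.1 a plan of the form  Unique -> Sort -> Seq Scan  means the whole
-- table has to be sorted before any duplicates are removed.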

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.


Re: Having problems with a 25 million row table on 8.1.3

From: "Joshua D. Drake"
> # - Memory -
>
> shared_buffers = 15000                  # min 16 or max_connections*2, 8KB each
> #temp_buffers = 1000                    # min 100, 8KB each
> #max_prepared_transactions = 5          # can be 0 or more
> # note: increasing max_prepared_transactions costs ~600 bytes of shared memory
> # per transaction slot, plus lock space (see max_locks_per_transaction).
> work_mem = 10240                        # min 64, size in KB
> maintenance_work_mem = 32768            # min 1024, size in KB
> #max_stack_depth = 2048                 # min 100, size in KB
>
> Anyone have any ideas on how to get all the unique tax codes from this
> table?

Push it to a cursor and select only pieces at a time?
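
A rough sketch of the cursor approach (note that with a sort-based plan the
first FETCH still has to wait for the underlying sort to finish):

BEGIN;
DECLARE tax_cur NO SCROLL CURSOR FOR
    SELECT DISTINCT tax_code FROM warehouse.sec_trans;
FETCH 1000 FROM tax_cur;   -- repeat until no more rows come back
CLOSE tax_cur;
COMMIT;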

Joshua D. Drake

>
>
> Thanks in advance :-)
>
>
>
> Tony
>


--

            === The PostgreSQL Company: Command Prompt, Inc. ===
      Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
      Providing the most comprehensive  PostgreSQL solutions since 1997
                     http://www.commandprompt.com/



Re: Having problems with a 25 million row table on 8.1.3

From: Andrew - Supernews
On 2006-04-25, Tony Caduto <tony_caduto@amsoftwaredesign.com> wrote:
> Hi,
> I have a client who has a 25 million row table that is used to keep
> track of financial security info.
> So far it has worked great, but today someone wanted to get all the tax
> codes (there are lots of dupes) from the table.
> So we tried this:
>
> select DISTINCT tax_code from warehouse.sec_trans
> We let this run for 1/2 hour or so and canceled it.
>
> Then I tried select DISTINCT ON (tax_code) tax_code from warehouse.sec_trans
>
> same deal, had to cancel it.

select tax_code from warehouse.sec_trans group by tax_code;

Both of the DISTINCT variants rely on sorting. GROUP BY, on the other hand,
can use a hash aggregate, which will be much more efficient where the number
of distinct values is low. Of course it will still need to scan the whole
table...
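
If it helps, EXPLAIN should show whether the hash route is actually taken
(the plan shape below is from memory, not from this particular table):

EXPLAIN SELECT tax_code FROM warehouse.sec_trans GROUP BY tax_code;
-- HashAggregate over a Seq Scan is the plan you want to see; if a
-- Sort/GroupAggregate shows up instead, the planner estimates there are
-- too many distinct groups to fit the hash table in work_mem.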

--
Andrew, Supernews
http://www.supernews.com - individual and corporate NNTP services

Re: Having problems with a 25 million row table on 8.1.3

From: Simon Riggs
On Tue, 2006-04-25 at 15:10 -0500, Tony Caduto wrote:

> I have a client who has a 25 million row table that is used to keep

> work_mem = 10240                        # min 64, size in KB

Try putting work_mem a lot higher to improve the sort speed.
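
It can be raised for just the one session instead of in postgresql.conf;
the value below is only a guess at what 2GB of RAM can spare:

SET work_mem = 262144;          -- 256MB; 8.1 takes the value in KB
SELECT DISTINCT tax_code FROM warehouse.sec_trans;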

--
  Simon Riggs
  EnterpriseDB   http://www.enterprisedb.com


Re: Having problems with a 25 million row table on 8.1.3

From: Aaron Evans
try:

select tax_code from warehouse.sec_trans group by tax_code

there was a discussion about this on the pgsql-performance list a while back:

http://archives.postgresql.org/pgsql-performance/2004-10/msg00053.php

-ae

On Apr 25, 2006, at 4:10 PM, Tony Caduto wrote:

> select DISTINCT tax_code from warehouse.sec_trans
>
> We let this run for 1/2 hour or so and canceled it.
>
> Then I tried select DISTINCT ON (tax_code) tax_code from
> warehouse.sec_trans

Re: Having problems with a 25 million row table on 8.1.3

From: "hubert depesz lubaczewski"
On 4/25/06, Tony Caduto <tony_caduto@amsoftwaredesign.com> wrote:
> So far it has worked great, but today someone wanted to get all the tax
> codes (there are lots of dupes) from the table.

Others have already suggested the GROUP BY approach.
I would also like to suggest using triggers to maintain the list of tax codes
in a separate table; it would greatly speed up every future query that needs
this list.
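
A rough sketch of what I mean (the column type and the names are guesses,
not taken from the real schema, plpgsql is assumed to be installed, and two
concurrent inserts of a brand-new code could still race):

CREATE TABLE warehouse.tax_codes (tax_code text PRIMARY KEY);

-- one-off initial load: still a full scan, but only ever needed once
INSERT INTO warehouse.tax_codes
    SELECT tax_code FROM warehouse.sec_trans GROUP BY tax_code;

CREATE OR REPLACE FUNCTION warehouse.remember_tax_code() RETURNS trigger AS $$
BEGIN
    -- remember the code only if we have not seen it before
    IF NEW.tax_code IS NOT NULL AND NOT EXISTS
       (SELECT 1 FROM warehouse.tax_codes WHERE tax_code = NEW.tax_code) THEN
        INSERT INTO warehouse.tax_codes (tax_code) VALUES (NEW.tax_code);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sec_trans_tax_code
    BEFORE INSERT OR UPDATE ON warehouse.sec_trans
    FOR EACH ROW EXECUTE PROCEDURE warehouse.remember_tax_code();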

depesz