Re: how to handle a big table for data log - Mailing list pgsql-performance

From Jorge Montero
Subject Re: how to handle a big table for data log
Date
Msg-id 4C442B03.2E1C.0042.0@homedecorators.com
In response to how to handle a big table for data log  (kuopo <spkuo@cs.nctu.edu.tw>)
Responses Re: how to handle a big table for data log  (kuopo <spkuo@cs.nctu.edu.tw>)
List pgsql-performance
Large tables, by themselves, are not necessarily a problem. The problem is what you are trying to do with them: depending on the operations, partitioning the table might help performance or make it worse.
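
For date-based splitting specifically, the usual approach in current releases is inheritance partitioning with CHECK constraints, so queries that filter on the date column can skip the other children via constraint exclusion. A rough sketch, with made-up table and column names:

CREATE TABLE log (
    log_time  timestamptz NOT NULL,
    log_type  integer,
    payload   text
);

-- one child per day; the CHECK constraint is what constraint exclusion uses
CREATE TABLE log_2010_07_19 (
    CHECK (log_time >= DATE '2010-07-19' AND log_time < DATE '2010-07-20')
) INHERITS (log);

CREATE INDEX log_2010_07_19_time_idx ON log_2010_07_19 (log_time);

-- new rows have to be routed to the right child, either by inserting into
-- it directly or with a trigger on the parent

-- with constraint_exclusion = partition (the 8.4 default), a query like
-- this only scans the matching child
SELECT count(*) FROM log
WHERE log_time >= DATE '2010-07-19' AND log_time < DATE '2010-07-20';

Whether that wins depends entirely on the queries: if they don't filter on the partition key, every child gets scanned and you only pay extra planning and maintenance overhead.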
 
What kind of queries are you running? How many days of history are you keeping? Could you post an explain analyze output of a query that is being problematic?
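
For example, something along these lines against the real table (log, log_time and log_type here are just placeholders):

EXPLAIN ANALYZE
SELECT *
FROM log
WHERE log_time >= DATE '2010-07-19' AND log_type = 3;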
Given the amount of data you hint at, your server configuration and any custom statistics targets you have set for the big tables in question would also be useful to see.
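
By custom statistics targets I mean whether you have done something like this (the column name is just an example):

ALTER TABLE log ALTER COLUMN log_type SET STATISTICS 1000;
ANALYZE log;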

>>> kuopo <spkuo@cs.nctu.edu.tw> 7/19/2010 1:27 AM >>>
Hi,

I have to handle a log table that accumulates a large amount of
log records. This table only involves insert and query operations.
To limit the table size, I tried to split it by date. However, the
number of logs is still large (46 million records per day). To
further limit its size, I also tried splitting the log table by
log type, but this did not improve performance; it is much slower
than the single big table. I guess this is because of the extra
auto-vacuum/analyze cost across all the split tables.

Can anyone comment on this situation? Thanks in advance.


kuopo.
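
Regarding the auto-vacuum/analyze guess: since 8.4 the autovacuum thresholds can be set per table, so the busiest partitions can be tuned separately from the global settings. A sketch, with a hypothetical partition name and purely illustrative values:

ALTER TABLE log_2010_07_19 SET (
    autovacuum_vacuum_scale_factor  = 0.4,
    autovacuum_analyze_scale_factor = 0.2
);

On insert-only children it is mostly the analyze threshold that fires, so relaxing it per partition can cut the background work without changing the cluster-wide defaults.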

