Re: About the relation between fragmentation of file and - Mailing list pgsql-performance

From David Lang
Subject Re: About the relation between fragmentation of file and
Msg-id Pine.LNX.4.62.0512010355580.2807@qnivq.ynat.uz
In response to Re: About the relation between fragmentation of file and (Richard Huxton <dev@archonet.com>)
On Thu, 1 Dec 2005, Richard Huxton wrote:

> Tatsumi Abe wrote:
>> The question is about the relation between file fragmentation and
>> VACUUM performance.
>>
>> <Environment>
>> OS:RedHat Enterprise Linux AS Release 3 (Taroon Update 6)
>>     Kernel 2.4.21-37.ELsmp on an i686
>>     Filesystem Type ext3
>>     Filesystem features:  has_journal filetype needs_recovery sparse_super large_file

try different filesystems; ext2/3 do a very poor job when you have lots
of files in a directory (and 7000+ files is a lot). you can also try
mounting the filesystem with noatime,nodiratime to reduce seeks when
reading, and try mounting it with oldalloc (which changes how the files
are arranged on disk when writing and extending them). I've seen drastic
speed differences between ext2 and ext3 based on this option (ext2
defaults to oldalloc, ext3 defaults to the newer orlov allocator, which
is faster in many cases).
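
for example, something along these lines (a sketch, untested; the
device and mount point are placeholders for wherever your data
directory actually lives):

   # remount the filesystem holding the data directory without atime updates
   mount -o remount,noatime,nodiratime /var/lib/pgsql

   # or make it permanent (and add oldalloc) via /etc/fstab:
   /dev/sda3  /var/lib/pgsql  ext3  defaults,noatime,nodiratime,oldalloc  1 2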

>> CPU:Intel(R) Xeon(TM) CPU 2.80GHz stepping 01
>> Memory:2.0GB
>> HDD:80GB(S-ATA)
>>      SATA max UDMA/133
>> PostgreSQL:7.3.8
>>
>> <DB Environment>
>> 1. There are approx. 3500 tables in the DB
>
>> When insert performance was measured in the above-mentioned
>> environment, it took six minutes to write 10000 rows by the fourth
>> or fifth day after the measurement had begun. While searching for
>> the bottleneck with iostat, I found that disk I/O was the problem:
>> %iowait was almost 100% at that time.
>>
>> On the very first day VACUUM processing time was not a problem, but
>> as the days progressed its processing time kept increasing. I then
>> examined the fragmentation of the database area (pgsql/data/base)
>> using the following tool.
>>
>> Disk Allocation Viewer
>> http://sourceforge.net/projects/davtools/
>>
>> The fragmentation rate was 28% before defragmenting.
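
btw, you can also check per-file fragmentation with filefrag (from
e2fsprogs) rather than scanning the whole disk. a sketch, needs root,
and the oid directory is a placeholder for the database you care about:

   # list the most fragmented files in one database's directory
   filefrag /path/to/pgsql/data/base/16384/* | sort -t: -k2 -n | tail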
>
> I'd guess the root of your problem is the number of tables (3500), which
> if each has one index represents at least 7000 files. That means a lot
> of your I/O time will probably be spent moving the disk heads between
> the different files.

depending on the size of the tables it can actually be a lot worse than
this (remember Postgres splits each table into fixed-size chunks, one
file per chunk)

when postgres adds data it will eventually spill over into additional
files. when you do a vacuum, does it re-write the tables into a smaller
number of files, or just rewrite the individual files (making each of
them smaller, but keeping the same number of files)?
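
for reference, the chunks show up as numbered segment files under the
data directory, 1GB apiece by default. the oid, filenames, and sizes
here are made up:

   $ ls -l data/base/16384/18201*
   -rw-------  1 postgres postgres 1073741824 Dec  1 03:55 18201
   -rw-------  1 postgres postgres 1073741824 Dec  1 03:55 18201.1
   -rw-------  1 postgres postgres  268435456 Dec  1 03:55 18201.2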

speaking of this, the selection of the size of these chunks is a
compromise between the time needed to seek in an individual file and
the number of files that are created. is there an easy way to tinker
with this? (I am sure the default is not correct for all filesystems;
filesystems' handling of large and/or many files differs drastically)
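
as far as I know the segment size is a compile-time constant, so
tinkering with it means editing the source and rebuilding. roughly (the
header that defines it moves around between versions):

   # find the constant, edit it, then recompile postgres
   grep -rn RELSEG_SIZE src/include/
   # the default works out to 1GB: RELSEG_SIZE blocks of BLCKSZ (8192 bytes) each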

> You say you can't stop the server, so there's no point in thinking about
> a quick hardware upgrade to help you. Also a version-upgrade is not
> do-able for you.

there's a difference between stopping the server once for an upgrade
(hardware or software) and having to stop it every few days to defrag
things forever after.

David Lang

> I can only think of two other options:
> 1. Change the database schema to reduce the number of tables involved.
> I'm assuming that of the 3500 tables most hold the same data but for
> different clients (or something similar). This might not be practical
> either.
>
> 2. Re-order how you access the database. ANALYSE the updated tables
> regularly, but only VACUUM them after deletions. Group your inserts so
> that all the inserts for table1 go together, then all the inserts for
> table2 go together and so on. This should help with the fragmentation by
> making sure the files get extended in larger chunks.
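
to illustrate the grouping, a sketch with made-up table names ("..."
stands for the real column values):

   # one transaction per table, all of that table's inserts together
   psql mydb -c "BEGIN; INSERT INTO table1 VALUES (...); INSERT INTO table1 VALUES (...); COMMIT;"
   psql mydb -c "BEGIN; INSERT INTO table2 VALUES (...); COMMIT;"
   psql mydb -c "ANALYZE table1"    # analyze updated tables regularly
   psql mydb -c "VACUUM table1"     # but only vacuum after deletions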
>
> Are you sure it's not possible to spend 15 mins offline to solve this?
> --
>  Richard Huxton
>  Archonet Ltd
