Re: Large rows number, and large objects

From: Andrzej Nakonieczny
Subject: Re: Large rows number, and large objects
Date:
Msg-id: 4E272DA1.6080205@e-list.pingwin.eu.org
In response to: Re: Large rows number, and large objects  (Jose Ildefonso Camargo Tolosa)
Responses: Re: Large rows number, and large objects  (Jose Ildefonso Camargo Tolosa)
List: pgsql-performance


On 20.07.2011 17:57, Jose Ildefonso Camargo Tolosa wrote:

[...]

>     Many of the advantages of partitioning have to do with maintenance
>     tasks.  For example, if you gather data on a daily basis, it's faster
>     to drop the partition that contains Thursday's data than it is to do a
>     DELETE that finds the rows and deletes them one at a time.  And VACUUM
>     can be a problem on very large tables as well, because only one VACUUM
>     can run on a table at any given time.  If the frequency with which the
>     table needs to be vacuumed is less than the time it takes for VACUUM
>     to complete, then you've got a problem.
>
>
> And.... pg_largeobject table doesn't get vacuumed? I mean, isn't that
> table just as any other table?

Vacuum is a real problem on a big pg_largeobject table. I have a 1.6 TB
database consisting mostly of large objects, and vacuuming that table on a
fast SAN takes about 4 hours:

         now         |        start        |   time   | datname |                current_query
---------------------+---------------------+----------+---------+----------------------------------------------
 2011-07-20 20:12:03 | 2011-07-20 16:21:20 | 03:50:43 | bigdb   | autovacuum: VACUUM pg_catalog.pg_largeobject
(1 row)
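For reference (not part of the original mail), a query of this shape against
pg_stat_activity could produce output like the above. It assumes a pre-9.2
server, where the view still has a current_query column; on 9.2 and later the
column is named query and autovacuum workers are filtered differently:

```sql
-- Sketch: show running autovacuum jobs and how long they have been running.
-- Assumes pre-9.2 pg_stat_activity (current_query column).
SELECT date_trunc('second', now())               AS now,
       date_trunc('second', query_start)         AS start,
       date_trunc('second', now() - query_start) AS time,
       datname,
       current_query
  FROM pg_stat_activity
 WHERE current_query LIKE 'autovacuum: VACUUM%';
```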


Adding large objects generates a lot of dead tuples:

      relname      | n_dead_tup
------------------+------------
  pg_largeobject   |     246731
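The dead-tuple count above can be read from the statistics views; since
pg_largeobject is a system catalog, it shows up in pg_stat_all_tables (or
pg_stat_sys_tables) rather than pg_stat_user_tables. A sketch, not taken from
the original mail:

```sql
-- Sketch: dead-tuple count for pg_largeobject from the stats collector.
SELECT relname, n_dead_tup
  FROM pg_stat_all_tables
 WHERE relname = 'pg_largeobject';
```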

Adding an LO is very fast when the table has just been vacuumed. But when
there are a lot of dead tuples, adding an LO is very slow (50-100 times
slower) and eats 100% of a CPU.

It looks like the better approach is to write the objects directly as bytea
in partitioned tables, although that is a bit slower than the LO interface
on a freshly vacuumed table.
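A minimal sketch of that bytea-on-partitions idea, using the inheritance-based
partitioning available at the time (table and column names are hypothetical,
not from the original mail):

```sql
-- Sketch: parent table plus one date-ranged child partition.
-- Dropping a whole partition avoids the DELETE + VACUUM cost
-- that a single huge pg_largeobject table incurs.
CREATE TABLE blobs (
    id    bigserial,
    added timestamptz NOT NULL DEFAULT now(),
    data  bytea
);

CREATE TABLE blobs_2011_07 (
    CHECK (added >= '2011-07-01' AND added < '2011-08-01')
) INHERITS (blobs);

-- Retiring July's data is then a cheap metadata operation:
-- DROP TABLE blobs_2011_07;
```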


Regards,
Andrzej

