Thread: Inserts or Updates

Inserts or Updates

From
Ofer Israeli
Date:

Hi all,

 

We are currently “stuck” at a performance bottleneck in our PostgreSQL server, and we are considering two potential solutions that I would be happy to hear your opinions on.

 

Our system has a couple of tables that hold client-generated information.  The clients communicate with the server every minute, and thus we perform an update on these two tables every minute.  We are talking about ~50K clients (and therefore records).

 

These constant updates have caused the tables to grow drastically and the indexes to bloat.  The two solutions we are considering are:

  1. Configure autovacuum to work more intensively in both time and cost parameters.

Pros:

Not a major architectural change.

Cons:

Autovacuum does not handle index bloating and thus we will need to periodically reindex the tables.

Perhaps we will also need to run vacuum full periodically if the autovacuum cleaning is not at the required pace and therefore defragmentation of the tables is needed?

 

  2. Creating a new table every minute and inserting the data into this new temporary table (inserts only).  This process will happen every minute.  Note that in this process we will also need to copy over missing data (clients that didn't communicate) from the older table.

Pros:

Tables are always compact.

We will not reach a limit of autovacuum.

Cons:

Major architectural change.

 

So to sum it up: we would be happy to refrain from a major change to the system (solution #2), but we are not certain of the correct way to work in our situation of constantly updated records.  Is it to configure an aggressive autovacuum, or is the “known methodology” to work with insert-only temporary tables?

 

 

Thank you,

Ofer

Re: Inserts or Updates

From
"Kevin Grittner"
Date:
Ofer Israeli  wrote:

> Our system has a couple of tables that hold client generated
> information. The clients communicate every minute with the server
> and thus we perform an update on these two tables every minute. We
> are talking about ~50K clients (and therefore records).
>
> These constant updates have made the table sizes to grow
> drastically and index bloating. So the two solutions that we are
> talking about are:
>
> 1. Configure autovacuum to work more intensively in both time and
> cost parameters.
> Pros:
> Not a major architectural change.
> Cons:
> Autovacuum does not handle index bloating and thus we will need to
> periodically reindex the tables.

Done aggressively enough, autovacuum should prevent index bloat, too.
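For reference, autovacuum can be tightened for just the hot tables without touching the global configuration. A sketch, where the table name and threshold values are illustrative assumptions, not recommendations:

```sql
-- Hypothetical example: make autovacuum fire much earlier on one hot table.
-- Tune the thresholds against your own update rate.
ALTER TABLE client_status SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- vacuum once ~1% of rows are dead
    autovacuum_vacuum_threshold    = 500,
    autovacuum_vacuum_cost_delay   = 0      -- let vacuum run at full speed
);
```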

> Perhaps we will also need to run vacuum full periodically if the
> autovacuum cleaning is not at the required pace and therefore
> defragmentation of the tables is needed?

The other thing that can cause bloat in this situation is a
long-running transaction.  To correct occasional bloat due to that on
small frequently-updated tables we run CLUSTER on them daily during
off-peak hours.  If you are on version 9.0 or later, VACUUM FULL
instead would be fine.  While this locks the table against other
action while it runs, on a small table it is a small enough fraction
of a second that nobody notices.
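The off-peak maintenance described above could look like this (table and index names are placeholders):

```sql
-- Rewrite the table in index order, removing bloat; takes an exclusive
-- lock on the table while it runs.
CLUSTER client_status USING client_status_pkey;

-- On PostgreSQL 9.0 and later, VACUUM FULL also rewrites the table
-- compactly and is fine for this purpose:
VACUUM FULL client_status;
```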

> 1. Creating a new table every minute and inserting the data into
> this new temporary table (only inserts). This process will happen
> every minute. Note that in this process we will also need to copy
> missing data (clients that didn't communicate) from older table.
> Pros:
> Tables are always compact.
> We will not reach a limit of autovacuum.
> Cons:
> Major architectural change.

I would try the other alternative first.

-Kevin

Re: Inserts or Updates

From
Andy Colson
Date:
On 2/7/2012 4:18 AM, Ofer Israeli wrote:
> Hi all,
>
> We are currently “stuck” with a performance bottleneck in our server
> using PG and we are thinking of two potential solutions which I would be
> happy to hear your opinion about.
>
> Our system has a couple of tables that hold client generated
> information. The clients communicate *every* minute with the server and
> thus we perform an update on these two tables every minute. We are
> talking about ~50K clients (and therefore records).
>
> These constant updates have made the table sizes to grow drastically and
> index bloating. So the two solutions that we are talking about are:
>

You don't give any table details, so I'll have to guess.  Maybe you have
too many indexes on your table?  Or you don't have a good primary index,
which means your updates are changing the primary key?

If you only have a primary index, and you are not changing it, PG should
be able to do HOT updates.
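HOT updates also need free space on the same page to hold the new row version; lowering the table's fillfactor leaves that room. A hypothetical sketch (table name assumed):

```sql
-- Leave ~30% of each page free so updated row versions can stay on the
-- same page -- a prerequisite for HOT, besides not updating any indexed
-- column. This only affects newly written pages; a table rewrite
-- (VACUUM FULL or CLUSTER) applies it to existing ones.
ALTER TABLE client_status SET (fillfactor = 70);
VACUUM FULL client_status;
```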

If you have lots of indexes, you should review them; you probably don't
need half of them.


And like Kevin said, try the simple one first.  Won't hurt anything, and
if it works, great!

-Andy

Re: Inserts or Updates

From
Ofer Israeli
Date:
Thanks Kevin for the ideas.  Now that you have corrected our misconception regarding autovacuum not handling index
bloating, we are looking into running autovacuum frequently enough to make sure we don't have a significant increase
in table size or index size.  We intend to keep our transactions short enough not to reach the situation where VACUUM
FULL or CLUSTER is needed.

Thanks,
Ofer

-----Original Message-----
From: Kevin Grittner [mailto:Kevin.Grittner@wicourts.gov]
Sent: Tuesday, February 07, 2012 2:28 PM
To: Ofer Israeli; pgsql-performance@postgresql.org
Cc: Netta Kabala; Olga Vingurt
Subject: Re: [PERFORM] Inserts or Updates

Ofer Israeli  wrote:

> Our system has a couple of tables that hold client generated
> information. The clients communicate every minute with the server
> and thus we perform an update on these two tables every minute. We
> are talking about ~50K clients (and therefore records).
>
> These constant updates have made the table sizes to grow
> drastically and index bloating. So the two solutions that we are
> talking about are:
>
> 1. Configure autovacuum to work more intensively in both time and
> cost parameters.
> Pros:
> Not a major architectural change.
> Cons:
> Autovacuum does not handle index bloating and thus we will need to
> periodically reindex the tables.

Done aggressively enough, autovacuum should prevent index bloat, too.

> Perhaps we will also need to run vacuum full periodically if the
> autovacuum cleaning is not at the required pace and therefore
> defragmentation of the tables is needed?

The other thing that can cause bloat in this situation is a
long-running transaction.  To correct occasional bloat due to that on
small frequently-updated tables we run CLUSTER on them daily during
off-peak hours.  If you are on version 9.0 or later, VACUUM FULL
instead would be fine.  While this locks the table against other
action while it runs, on a small table it is a small enough fraction
of a second that nobody notices.

> 1. Creating a new table every minute and inserting the data into
> this new temporary table (only inserts). This process will happen
> every minute. Note that in this process we will also need to copy
> missing data (clients that didn't communicate) from older table.
> Pros:
> Tables are always compact.
> We will not reach a limit of autovacuum.
> Cons:
> Major architectural change.

I would try the other alternative first.

-Kevin

Scanned by Check Point Total Security Gateway.

Re: Inserts or Updates

From
Claudio Freire
Date:
On Tue, Feb 7, 2012 at 2:27 PM, Ofer Israeli <oferi@checkpoint.com> wrote:
> Thanks Kevin for the ideas.  Now that you have corrected our misconception regarding the autovacuum not handling
> index bloating, we are looking into running autovacuum frequently enough to make sure we don't have significant
> increase in table size or index size.  We intend to keep our transactions short enough not to reach the situation
> where vacuum full or CLUSTER is needed.

Also, rather than going overboard with autovacuum settings, do make it
more aggressive, but also set up a regular, manual vacuum of either
the whole database or whatever tables you need to vacuum at
known-low-load hours.
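Such a scheduled "manual" vacuum could be a cron job invoking psql or vacuumdb at the quiet hour, running something like the following (table names are hypothetical):

```sql
-- Run at a known-low-load hour, targeting just the two hot tables
-- rather than the whole database; ANALYZE keeps statistics fresh too.
VACUUM ANALYZE client_status;
VACUUM ANALYZE client_events;
```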

Re: Inserts or Updates

From
Ofer Israeli
Date:
Hi Andy,

The two tables I am referring to have the following specs:
Table 1:
46 columns
23 indexes on fields of the following types:
INTEGER - 7
TIMESTAMP - 2
VARCHAR - 12
UUID - 2

Table 2:
23 columns
12 indexes on fields of the following types:
INTEGER - 3
TIMESTAMP - 1
VARCHAR - 6
UUID - 2

All indexes are default indexes.

The primary index is INTEGER and is not updated.

The indexes are used for sorting and filtering purposes in our UI.


I will be happy to hear your thoughts on this.

Thanks,
Ofer

-----Original Message-----
From: Andy Colson [mailto:andy@squeakycode.net]
Sent: Tuesday, February 07, 2012 4:47 PM
To: Ofer Israeli
Cc: pgsql-performance@postgresql.org; Olga Vingurt; Netta Kabala
Subject: Re: [PERFORM] Inserts or Updates

On 2/7/2012 4:18 AM, Ofer Israeli wrote:
> Hi all,
>
> We are currently "stuck" with a performance bottleneck in our server
> using PG and we are thinking of two potential solutions which I would be
> happy to hear your opinion about.
>
> Our system has a couple of tables that hold client generated
> information. The clients communicate *every* minute with the server and
> thus we perform an update on these two tables every minute. We are
> talking about ~50K clients (and therefore records).
>
> These constant updates have made the table sizes to grow drastically and
> index bloating. So the two solutions that we are talking about are:
>

You dont give any table details, so I'll have to guess.  Maybe you have
too many indexes on your table?  Or, you dont have a good primary index,
which means your updates are changing the primary key?

If you only have a primary index, and you are not changing it, Pg should
be able to do HOT updates.

If you have lots of indexes, you should review them, you probably don't
need half of them.


And like Kevin said, try the simple one first.  Wont hurt anything, and
if it works, great!

-Andy


Re: Inserts or Updates

From
Ofer Israeli
Date:
Hi Claudio,

You mean running a VACUUM statement manually?  I would basically try to avoid such a situation as, the way I see it,
the database should be configured in such a manner that it will be able to handle the load at any given moment, and so
I wouldn't want to manually intervene here.  If you think differently, I'll be happy to stand corrected.


Thanks,
Ofer


-----Original Message-----
From: pgsql-performance-owner@postgresql.org [mailto:pgsql-performance-owner@postgresql.org] On Behalf Of Claudio
Freire
Sent: Tuesday, February 07, 2012 7:31 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Inserts or Updates

On Tue, Feb 7, 2012 at 2:27 PM, Ofer Israeli <oferi@checkpoint.com> wrote:
> Thanks Kevin for the ideas.  Now that you have corrected our misconception regarding the autovacuum not handling
> index bloating, we are looking into running autovacuum frequently enough to make sure we don't have significant
> increase in table size or index size.  We intend to keep our transactions short enough not to reach the situation
> where vacuum full or CLUSTER is needed.

Also, rather than going overboard with autovacuum settings, do make it
more aggressive, but also set up a regular, manual vacuum of either
the whole database or whatever tables you need to vacuum at
known-low-load hours.

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: Inserts or Updates

From
Claudio Freire
Date:
On Tue, Feb 7, 2012 at 2:43 PM, Ofer Israeli <oferi@checkpoint.com> wrote:
> You mean running a VACUUM statement manually?  I would basically try to avoid such a situation as the way I see it,
thedatabase should be configured in such a manner that it will be able to handle the load at any given moment and so I
wouldn'twant to manually intervene here.  If you think differently, I'll be happy to stand corrected. 

I do think differently.

Autovacuum isn't perfect, and you shouldn't make it too aggressive
since it does generate a lot of I/O activity. If you can pick a time
where it will be able to run without interfering too much, running
vacuum "manually" (where manually could easily be a cron task, ie,
automatically but coming from outside the database software itself),
you'll be able to dial down autovacuum and have more predictable load
overall.

Re: Inserts or Updates

From
Ofer Israeli
Date:
>> You mean running a VACUUM statement manually?  I would basically try to
>> avoid such a situation as the way I see it, the database should be
>> configured in such a manner that it will be able to handle the load at
>> any given moment and so I wouldn't want to manually intervene here.  If
>> you think differently, I'll be happy to stand corrected.
>
> I do think differently.
>
> Autovacuum isn't perfect, and you shouldn't make it too aggressive
> since it does generate a lot of I/O activity. If you can pick a time
> where it will be able to run without interfering too much, running
> vacuum "manually" (where manually could easily be a cron task, ie,
> automatically but coming from outside the database software itself),
> you'll be able to dial down autovacuum and have more predictable load
> overall.
>


Is there something specific you are referring to in autovacuum's imperfection, that is, what types of issues are you aware of?

As for the I/O: it is indeed true that it can generate a lot of activity, but the way I see it, if you run performance
tests and the tests succeed in all parameters even with heavy I/O, then you are good to go.  That is, I don't mind the
server doing lots of I/O as long as it's not causing lags in processing the messages that it handles.


Thanks,
Ofer


Re: Inserts or Updates

From
Claudio Freire
Date:
On Tue, Feb 7, 2012 at 4:12 PM, Ofer Israeli <oferi@checkpoint.com> wrote:
> Something specific that you refer to in autovacuum's non-perfection, that is, what types of issues are you aware of?

I refer to its criteria for when to perform vacuum/analyze. Especially
analyze. It usually fails to detect the requirement to analyze a table
- sometimes value distributions change without triggering an
autoanalyze. That's expected, as autoanalyze works on the number of
tuples updated/inserted relative to table size, which is too generic
to catch business-specific conditions.

As with everything, it depends on your business. The usage pattern, the
kinds of updates performed, how data varies in time... but in essence,
I've found that forcing a periodic vacuum/analyze of tables beyond
what autovacuum does improves stability. You know a lot more about the
business and access/update patterns than autovacuum, so you can
schedule them where they are needed and autovacuum wouldn't.

> As for the I/O - this is indeed true that it can generate much activity, but the way I see it, if you run
> performance tests and the tests succeed in all parameters even with heavy I/O, then you are good to go.  That is, I
> don't mind the server doing lots of I/O as long as it's not causing lags in processing the messages that it handles.

If you don't mind the I/O, by all means, crank it up.

Re: Inserts or Updates

From
Andy Colson
Date:
> -----Original Message-----
> From: Andy Colson [mailto:andy@squeakycode.net]
> Sent: Tuesday, February 07, 2012 4:47 PM
> To: Ofer Israeli
> Cc: pgsql-performance@postgresql.org; Olga Vingurt; Netta Kabala
> Subject: Re: [PERFORM] Inserts or Updates
>
> On 2/7/2012 4:18 AM, Ofer Israeli wrote:
>> Hi all,
>>
>> We are currently "stuck" with a performance bottleneck in our server
>> using PG and we are thinking of two potential solutions which I would be
>> happy to hear your opinion about.
>>
>> Our system has a couple of tables that hold client generated
>> information. The clients communicate *every* minute with the server and
>> thus we perform an update on these two tables every minute. We are
>> talking about ~50K clients (and therefore records).
>>
>> These constant updates have made the table sizes to grow drastically and
>> index bloating. So the two solutions that we are talking about are:
>>
>
> You dont give any table details, so I'll have to guess.  Maybe you have
> too many indexes on your table?  Or, you dont have a good primary index,
> which means your updates are changing the primary key?
>
> If you only have a primary index, and you are not changing it, Pg should
> be able to do HOT updates.
>
> If you have lots of indexes, you should review them, you probably don't
> need half of them.
>
>
> And like Kevin said, try the simple one first.  Wont hurt anything, and
> if it works, great!
>
> -Andy
>


On 2/7/2012 11:40 AM, Ofer Israeli wrote:
 > Hi Andy,
 >
 > The two tables I am referring to have the following specs:
 > Table 1:
 > 46 columns
 > 23 indexes on fields of the following types:
 > INTEGER - 7
 > TIMESTAMP - 2
 > VARCHAR - 12
 > UUID - 2
 >
 > 23 columns
 > 12 indexes on fields of the following types:
 > INTEGER - 3
 > TIMESTAMP - 1
 > VARCHAR - 6
 > UUID - 2
 >
 > All indexes are default indexes.
 >
 > The primary index is INTERGER and is not updated.
 >
 > The indexes are used for sorting and filtering purposes in our UI.
 >
 >
 > I will be happy to hear your thoughts on this.
 >
 > Thanks,
 > Ofer
 >

Fixed that top post for ya.

Wow, so out of 46 columns, half of them have indexes?  That's a lot.
I'd bet you could drop a bunch of them.  You should review them and see
if they are actually helping you.  You already found out that maintaining
all those indexes is painful.  If they are not speeding up your SELECTs
by a huge amount, you should drop them.

Sounds like you went through your SQL statements and added an index for
any field that appeared in a WHERE or ORDER BY clause?

You need to find the columns that are the most selective.  An index
should be useful at cutting the number of rows down.  Once you have it
cut down, an index on another field won't really help that much.  And
after a result set has been collected, an index may or may not help for
sorting.

Running some queries with EXPLAIN ANALYZE would be helpful.  Give it a
run, drop an index, try it again to see if it's about the same, or if
that index made a difference.
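That drop-and-retest loop can be done without permanently losing the index, since DDL is transactional in PostgreSQL. A sketch (query, column, and index names are placeholders; note the DROP holds an exclusive lock on the table until the rollback):

```sql
BEGIN;
EXPLAIN ANALYZE SELECT * FROM "SuperBigTable"
    WHERE status = 3 ORDER BY updated_at;  -- baseline plan and timing

DROP INDEX some_suspect_idx;               -- only visible inside this txn
EXPLAIN ANALYZE SELECT * FROM "SuperBigTable"
    WHERE status = 3 ORDER BY updated_at;  -- rerun: did the plan get worse?

ROLLBACK;                                  -- the index comes back untouched
```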

-Andy

Re: Inserts or Updates

From
"Kevin Grittner"
Date:
Andy Colson <andy@squeakycode.net> wrote:

> Wow, so out of 46 columns, half of them have indexes?  That's a
> lot.  I'd bet you could drop a bunch of them.  You should review
> them and see if they are actually helping you.  You already found
> out that maintain all those indexes is painful.  If they are not
> speeding up your SELECT's by a huge amount, you should drop them.

You might want to review usage counts in pg_stat_user_indexes.

-Kevin

Re: Inserts or Updates

From
Andy Colson
Date:
Oh, I knew I'd seen index usage stats someplace.

give this a run:

select * from pg_stat_user_indexes where relname = 'SuperBigTable';

http://www.postgresql.org/docs/current/static/monitoring-stats.html

-Andy

Re: Inserts or Updates

From
Ofer Israeli
Date:
Claudio Freire wrote:
> On Tue, Feb 7, 2012 at 4:12 PM, Ofer Israeli <oferi@checkpoint.com>
> wrote:
>> Something specific that you refer to in autovacuum's non-perfection,
>> that is, what types of issues are you aware of?
>
> I refer to its criteria for when to perform vacuum/analyze.
> Especially analyze. It usually fails to detect the requirement to
> analyze a table - sometimes value distributions change without
> triggering an autoanalyze. It's expected, as the autoanalyze works on
> number of tuples updates/inserted relative to table size, which is
> too generic to catch business-specific conditions.
>
> As everything, it depends on your business. The usage pattern, the
> kinds of updates performed, how data varies in time... but in
> essence, I've found that forcing a periodic vacuum/analyze of tables
> beyond what autovacuum does improves stability. You know a lot more
> about the business and access/update patterns than autovacuum, so you
> can schedule them where they are needed and autovacuum wouldn't.
>
>> As for the I/O - this is indeed true that it can generate much
>> activity, but the way I see it, if you run performance tests and the
>> tests succeed in all parameters even with heavy I/O, then you are
>> good to go.  That is, I don't mind the server doing lots of I/O as
>> long as it's not causing lags in processing the messages that it
>> handles.
>
> If you don't mind the I/O, by all means, crank it up.


Thanks for the help, Claudio.  We're looking into both of these options.

Re: Inserts or Updates

From
Ofer Israeli
Date:
Andy Colson wrote:
> Oh, I knew I'd seen index usage stats someplace.
>
> give this a run:
>
> select * from pg_stat_user_indexes where relname = 'SuperBigTable';
>
> http://www.postgresql.org/docs/current/static/monitoring-stats.html
>
> -Andy
>


Thanks.  We have begun analyzing the indexes and indeed found many are pretty useless and will be removed.

Re: Inserts or Updates

From
Vik Reykja
Date:
On Wed, Feb 8, 2012 at 20:22, Ofer Israeli <oferi@checkpoint.com> wrote:
Andy Colson wrote:
> Oh, I knew I'd seen index usage stats someplace.
>
> give this a run:
>
> select * from pg_stat_user_indexes where relname = 'SuperBigTable';
>
> http://www.postgresql.org/docs/current/static/monitoring-stats.html
>
> -Andy
>


Thanks.  We have begun analyzing the indexes and indeed found many are pretty useless and will be removed.

A quick word of warning: not all indexes are used for querying, some are used for maintaining constraints and foreign keys. These show up as "useless" in the above query.
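One way to keep that caveat in view while reading the usage stats is to join in pg_index, which flags indexes backing primary keys and unique constraints (table name is a placeholder):

```sql
-- Seldom-used indexes, with a flag for those that enforce a constraint
-- (indisprimary / indisunique) and therefore must not be dropped.
SELECT s.indexrelname,
       s.idx_scan,
       i.indisprimary OR i.indisunique AS backs_constraint
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.relname = 'SuperBigTable'
ORDER BY s.idx_scan;
```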

Re: Inserts or Updates

From
Frank Lanitz
Date:
Am 07.02.2012 18:40, schrieb Ofer Israeli:
> Table 1:
> 46 columns
> 23 indexes on fields of the following types:
> INTEGER - 7
> TIMESTAMP - 2
> VARCHAR - 12
> UUID - 2
>
> 23 columns
> 12 indexes on fields of the following types:
> INTEGER - 3
> TIMESTAMP - 1
> VARCHAR - 6
> UUID - 2

Are you regularly updating all columns? If not, maybe it's a good idea
to split the tables so that the highly updated columns don't affect the
complete row.
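A minimal sketch of such a vertical split, with hypothetical column names, using a view to preserve a single-relation query shape for the UI:

```sql
-- Static, rarely-updated columns in one table...
CREATE TABLE client_static (
    client_id integer PRIMARY KEY,
    name      varchar,
    created   timestamp
);
-- ...hot, every-minute columns in a narrow one.
CREATE TABLE client_live (
    client_id integer PRIMARY KEY REFERENCES client_static,
    last_seen timestamp,
    status    integer
);
-- The UI can still read one relation:
CREATE VIEW client_full AS
    SELECT s.client_id, s.name, s.created, l.last_seen, l.status
    FROM client_static s
    JOIN client_live l USING (client_id);
```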

cheers,
Frank

Re: Inserts or Updates

From
Ofer Israeli
Date:
Frank Lanitz wrote:
> Am 07.02.2012 18:40, schrieb Ofer Israeli:
>> Table 1:
>> 46 columns
>> 23 indexes on fields of the following types:
>> INTEGER - 7
>> TIMESTAMP - 2
>> VARCHAR - 12
>> UUID - 2
>>
>> 23 columns
>> 12 indexes on fields of the following types:
>> INTEGER - 3
>> TIMESTAMP - 1
>> VARCHAR - 6
>> UUID - 2
>
> Are you regularly updating all columns? If not, maybe a good idea to
> split the tables so highly updated columns don't effect complete
> line.

We're not always updating all of the columns, but the reason for consolidating all the columns into one table is for
UI purposes - in the past, they had done benchmarks and found the JOINs to be extremely slow and so all data was
consolidated into one table.

Thanks,
Ofer

Re: Inserts or Updates

From
Frank Lanitz
Date:
Am 12.02.2012 11:48, schrieb Ofer Israeli:
> Frank Lanitz wrote:
>>> Am 07.02.2012 18:40, schrieb Ofer Israeli:
>>>>> Table 1: 46 columns 23 indexes on fields of the following
>>>>> types: INTEGER - 7 TIMESTAMP - 2 VARCHAR - 12 UUID - 2
>>>>>
>>>>> 23 columns 12 indexes on fields of the following types:
>>>>> INTEGER - 3 TIMESTAMP - 1 VARCHAR - 6 UUID - 2
>>>
>>> Are you regularly updating all columns? If not, maybe a good idea
>>> to split the tables so highly updated columns don't effect
>>> complete line.
> We're not always updating all of the columns, but the reason for
> consolidating all the columns into one table is for UI purposes - in
> the past, they had done benchmarks and found the JOINs to be
> extremely slow and so all data was consolidated into one table.

Ah... I see. Maybe you can check whether all of the data really needs
to be fetched with one SELECT, but this might end up in too much
guessing, and based on your feedback you have already done this step.

Cheers,
Frank



Re: Inserts or Updates

From
Ofer Israeli
Date:
Frank Lanitz wrote:
> Am 12.02.2012 11:48, schrieb Ofer Israeli:
>> Frank Lanitz wrote:
>>>> Am 07.02.2012 18:40, schrieb Ofer Israeli:
>>>>>> Table 1: 46 columns 23 indexes on fields of the following
>>>>>> types: INTEGER - 7 TIMESTAMP - 2 VARCHAR - 12 UUID - 2
>>>>>>
>>>>>> 23 columns 12 indexes on fields of the following types:
>>>>>> INTEGER - 3 TIMESTAMP - 1 VARCHAR - 6 UUID - 2
>>>>
>>>> Are you regularly updating all columns? If not, maybe a good idea
>>>> to split the tables so highly updated columns don't effect complete
>>>> line.
>> We're not always updating all of the columns, but the reason for
>> consolidating all the columns into one table is for UI purposes - in
>> the past, they had done benchmarks and found the JOINs to be
>> extremely slow and so all data was consolidated into one table.
>
> Ah... I see. Maybe you can check whether all of the data are really
> needed to fetch with one select but this might end up in tooo much
> guessing and based on your feedback you already did this step.


This was indeed checked, but I'm not sure it was thorough enough so we're having a go at it again.  In the meanwhile,
the autovacuum configurations have proved to help us immensely, so for now we're good (we will probably be asking
around soon when we hit our next bottleneck :)).  Thanks for your help!