Re: vacuum, performance, and MVCC - Mailing list pgsql-hackers

From Mark Woodward
Subject Re: vacuum, performance, and MVCC
Date
Msg-id 18829.24.91.171.78.1151117873.squirrel@mail.mohawksoft.com
In response to Re: vacuum, performance, and MVCC  (Rick Gigger <rick@alpinenetworking.com>)
Responses Re: vacuum, performance, and MVCC  (Alvaro Herrera <alvherre@commandprompt.com>)
List pgsql-hackers
> Just out of curiosity Mark, didn't you write your session daemon so
> that you don't have to put sessions in postgres anymore?

The original project started as a shared key/value system for a Beowulf
cluster in the late 90s, but it was reworked into a PHP session handler,
MSession, when I worked with Stig.

> Or are you
> just giving that as an example of a very wide, very heavily updated
> table?  My session tables have been an extreme case of this problem,
> but no other table that I have is so adversely affected by this
> behavior.  My decision was not to pull postgres out entirely, just
> using other session handlers.

I have been working as a consultant since 2001, and prior to that I was
CTO at a dot-com startup. MSession (the previous name) was used to
circumvent shortcomings in PostgreSQL, specifically the problem we are
talking about.

As a consultant, I have to convince the customer that "all is well." My
MCache system does not guarantee that no session data is lost, and neither
does memcached or any other non-ACID system.

The technical arguments we can make, no matter how correct, leave us on
the defensive when asked, "What if the server crashes, do you lose data?"
Of course the answer is "yes." Then we get drawn into a conversation about
transient, unimportant data versus persistent, valuable data, at which
point you've lost the customer. A solid SQL database is the de facto
standard: perhaps not the best choice, but unavoidable.

The update behavior of PostgreSQL is probably the *last* serious issue.
Debate it all you want; vacuum mitigates the problem to varying degrees,
but fixing it outright would be a huge win. If the update behavior gets
fixed, I can't think of a single issue with PostgreSQL that would be a
show stopper.
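
To make the behavior concrete, here is a minimal psql sketch of the
mechanism (the table and names are hypothetical): every UPDATE writes a
whole new row version plus a new index entry, and the superseded versions
linger as garbage until vacuum removes them.

    CREATE TABLE session_demo (id integer PRIMARY KEY, data text);
    INSERT INTO session_demo VALUES (1, 'initial');

    -- Each UPDATE creates a new row version and a new index entry;
    -- the old version stays behind as a dead tuple until vacuumed.
    UPDATE session_demo SET data = 'hit 1' WHERE id = 1;
    UPDATE session_demo SET data = 'hit 2' WHERE id = 1;
    UPDATE session_demo SET data = 'hit 3' WHERE id = 1;

    -- Expect output along the lines of "found 3 removable, 1
    -- nonremovable row versions" and "3 index row versions were removed".
    VACUUM VERBOSE session_demo;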


>
> Rick
>
> On Jun 22, 2006, at 7:59 AM, Mark Woodward wrote:
>
>>> After a long battle with technology, pgsql@mohawksoft.com ("Mark
>>> Woodward"), an earthling, wrote:
>>>>> Clinging to sanity, pgsql@mohawksoft.com ("Mark Woodward")
>>>>> mumbled into
>>>>> her beard:
>> [snip]
>>>>>
>>>>> 1.  The index points to all the versions, until they get
>>>>> vacuumed out.
>>>>
>>>> It can't point to "all" versions, it points to the last "current"
>>>> version
>>>> as  updated by vacuum, or the first version of the row.
>>>
>>> No, it points to *all* the versions.
>>>
>>> Suppose I take a table with two rows:
>>>
>>> INFO:  analyzing "public.test"
>>> INFO:  "test": 1 pages, 2 rows sampled, 2 estimated total rows
>>> VACUUM
>>>
>>> Then, over and over, I remove and insert one entry with the same PK:
>>>
>>> sample=# delete from test where id = 2;insert into test (id)
>>> values (2);
>>> DELETE 1
>>
>> [snip]
>>>
>>> Now, I vacuum it.
>>>
>>> sample=# vacuum verbose analyze test;
>>> INFO:  vacuuming "public.test"
>>> INFO:  index "test_id_key" now contains 2 row versions in 2 pages
>>> DETAIL:  10 index row versions were removed.
>>> 0 index pages have been deleted, 0 are currently reusable.
>>> CPU 0.00s/0.00u sec elapsed 0.00 sec.
>>> INFO:  "test": removed 10 row versions in 1 pages
>>> DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
>>> INFO:  "test": found 10 removable, 2 nonremovable row versions in
>>> 1 pages
>>> DETAIL:  0 dead row versions cannot be removed yet.
>>> There were 0 unused item pointers.
>>> 0 pages are entirely empty.
>>> CPU 0.00s/0.00u sec elapsed 0.00 sec.
>>> INFO:  analyzing "public.test"
>>> INFO:  "test": 1 pages, 2 rows sampled, 2 estimated total rows
>>> VACUUM
>>>
>>> Notice that the index contained 10 versions of that one row.
>>>
>>> It pointed to *ALL* the versions.
>>
>> Hmm, OK, then the problem is more serious than I suspected.
>> This means that every index on a row has to be updated on every
>> transaction that modifies that row. Is that correct?
>>
>> I am attaching some code that shows the problem with regard to
>> applications such as web server session management. When run, each
>> second the system can handle fewer and fewer connections. Here is a
>> brief output:
>>
>> markw@ent:~/pgfoo$ ./footest
>> 1307 sessions per second, elapsed: 1
>> 1292 sessions per second, elapsed: 2
>> 1287 sessions per second, elapsed: 3
>> ....
>> 1216 sessions per second, elapsed: 25
>> 1213 sessions per second, elapsed: 26
>> 1208 sessions per second, elapsed: 27
>> ....
>> 1192 sessions per second, elapsed: 36
>> 1184 sessions per second, elapsed: 37
>> 1183 sessions per second, elapsed: 38
>> ....
>> 1164 sessions per second, elapsed: 58
>> 1170 sessions per second, elapsed: 59
>> 1168 sessions per second, elapsed: 60
>>
>> As you can see, in about a minute at high load, this very simple table
>> lost about 10% of its performance, and I've seen worse based on update
>> frequency.  Before you say this is an obscure problem, I can tell
>> you it
>> isn't. I have worked with more than a few projects that had to
>> switch away
>> from PostgreSQL because of this behavior.
>>
>> Obviously this is not a problem for small sites, but it is a real
>> problem for an enterprise-level web site with millions of visitors and
>> actions a day. Quite frankly, it is a classic example of something that
>> does not scale: the more updates there are, the higher the load
>> becomes. You can see it in "top" as the footest program runs.
>>
>> There has to be a more linear way of handling this scenario.
>> <footest.c>
>>
>
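
For anyone who wants to see the degradation without the attached C
program, here is a rough psql-only sketch along the same lines (the
table and names are hypothetical; this is not the code in footest.c):

    CREATE TABLE sessions (id integer PRIMARY KEY, data text);
    INSERT INTO sessions SELECT g, 'x' FROM generate_series(1, 1000) AS g;

    \timing
    -- Each pass rewrites every row, leaving another 1000 dead tuples and
    -- 1000 dead index entries behind, so later passes take longer.
    UPDATE sessions SET data = 'y';
    UPDATE sessions SET data = 'y';
    UPDATE sessions SET data = 'y';

    -- Vacuum reclaims the dead versions, and the timings recover until
    -- the garbage accumulates again.
    VACUUM sessions;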


