On Fri, Jul 7, 2017 at 3:45 AM, Alik Khilazhev
<a.khilazhev@postgrespro.ru> wrote:
> PostgreSQL shows very bad results in YCSB Workload A (50% SELECT and 50% UPDATE of random row by PK) on benchmarking
> with big number of clients using Zipfian distribution. MySQL also has decline but it is not significant as it is in
> PostgreSQL. MongoDB does not have decline at all.
How is that possible? In a Zipfian distribution, no matter how big
the table is, almost all of the updates will be concentrated on a
handful of rows - and updates to any given row are necessarily
serialized, or so I would think. Maybe MongoDB can be fast there
since there are no transactions, so it can just lock the row, slam in
the new value, and unlock the row, all (I suppose) without writing WAL
or doing anything hard. But MySQL is going to have to hold the row
lock until transaction commit just like we do, or so I would think.
Is it just that their row locking is way faster than ours?
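To put a rough number on that concentration claim, here is a minimal sketch (assuming YCSB's default Zipfian constant of roughly 0.99; the function name and row counts are illustrative, not from the benchmark itself) of how much of the access probability mass lands on the hottest rows under a Zipfian distribution:

```python
# Back-of-envelope check: with Zipfian key selection, what fraction of
# accesses hit the hottest handful of rows? Assumes exponent s ~= 0.99,
# the YCSB default; row counts here are arbitrary examples.

def zipf_top_share(n_rows: int, n_top: int, s: float) -> float:
    """Fraction of total Zipfian probability mass on the n_top hottest rows."""
    weights = [k ** -s for k in range(1, n_rows + 1)]  # P(row k) proportional to 1/k^s
    return sum(weights[:n_top]) / sum(weights)

for n_rows in (10_000, 1_000_000):
    share = zipf_top_share(n_rows, 10, 0.99)
    print(f"{n_rows:>9} rows: top 10 rows draw {share:.1%} of accesses")
```

Even at a million rows, the ten hottest rows draw a double-digit percentage of all accesses, so in a 50% UPDATE workload those few rows become a serialization point for row locks regardless of table size.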
I'm more curious about why we're performing badly than I am about a
general-purpose random_zipfian function. :-)
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company