On 2023-02-06 20:04:39 +0100, Julian Backes wrote:
> I don't really understand what you mean by 'performance'. To me it is
> not surprising that incrementing (I know it is not just incrementing) a
> 64-bit integer is faster than generating 128 bits of data with a good
> amount of randomness, even if it seems to be too slow.
That's not really the problem with UUIDs, though. My (not very fast)
laptop can call getrandom() a million times per second in a single
thread. Avoiding the system call[1] could make UUID generation fast
enough to be completely negligible compared to the time it takes to
write a row to disk.
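To put a number on the generation cost itself, a quick psql session
like this (a minimal sketch; gen_random_uuid() is built in since
PostgreSQL 13, older versions need the pgcrypto extension) compares
generating a million UUIDs with simply counting a million rows:

    -- Enable client-side timing in psql.
    \timing on

    -- Generate one UUID per row; count() just forces evaluation.
    SELECT count(gen_random_uuid()) FROM generate_series(1, 1000000);

    -- Baseline: the same million rows without UUID generation.
    SELECT count(g) FROM generate_series(1, 1000000) g;

The difference between the two timings is roughly the pure generation
cost.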
But UUIDs are random, and that plays havoc with locality. For example,
consider one table with invoices and another with invoice items. If you
want to get all the invoices of a single day, including their items,
the data is probably nicely clustered together in the tables. But the
join has to look up essentially random ids, which are spread all over
the index. In a simple benchmark for this scenario the UUIDs were about
4.5 times slower than sequential ids. (In other benchmarks the
difference was only a few percent.)
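Concretely, the scenario looks roughly like this (a sketch; the table
and column names are made up for illustration, not the actual benchmark
schema):

    -- Hypothetical schema for the invoice/items scenario.
    CREATE TABLE invoices (
        id  uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        day date NOT NULL
    );
    CREATE TABLE invoice_items (
        id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        invoice_id uuid NOT NULL REFERENCES invoices(id)
    );
    CREATE INDEX ON invoice_items (invoice_id);

    -- Rows inserted on the same day sit close together in the heap,
    -- but the uuids they join on are scattered across the whole index:
    SELECT *
    FROM invoices i
    JOIN invoice_items it ON it.invoice_id = i.id
    WHERE i.day = '2023-02-06';

With sequential ids, one day's invoices occupy a narrow range of the
index; with UUIDs, the same lookups touch pages all over it.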
So depending on the queries the difference may be negligible or huge.
It really depends on your access patterns.
hp
[1] There was even a recent discussion on the LKML about making
getrandom() much faster.
--
_ | Peter J. Holzer | Story must make more sense than reality.
|_|_) | |
| | | hjp@hjp.at | -- Charles Stross, "Creative writing
__/ | http://www.hjp.at/ | challenge!"