The following discussion of locks in RAM finishes this series of articles. We will consider spinlocks, lightweight locks and buffer pins, as well as tools for monitoring waits and sampling.
We've already discussed some object-level locks (specifically, relation-level locks), as well as row-level locks and their connection to object-level locks, and we've explored wait queues, which are not always fair.
We have a hodgepodge this time. We'll start with deadlocks (actually, I planned to discuss them last time, but that article had already grown too long), then briefly review the remaining object-level locks, and finally discuss predicate locks.
Last time, we discussed object-level locks, in particular relation-level locks. In this article, we will see how row-level locks are organized in PostgreSQL and how they are used together with object-level locks. We will also talk of wait queues and of those who jump the queue.
In this series, we will discuss locks.
This series will consist of four articles:
- Relation-level locks (this article).
- Row-level locks.
- Locks on other objects and predicate locks.
- Locks in RAM.
The material of all the articles is based on training courses on administration that Pavel Luzanov and I are creating (mostly in Russian, although one course is available in English), but does not repeat them verbatim and is intended for careful reading and self-experimenting.
Many thanks to Elena Indrupskaya for the translation of these articles into English.
General information on locks
PostgreSQL has a wide variety of techniques that serve to lock something (or are at least called that). Therefore, I will first explain in the most general terms why locks are needed at all, what kinds are available and how they differ from one another. Then we will figure out which of this variety is used in PostgreSQL, and only after that will we start discussing different kinds of locks in detail.
In this blog post, we explain to those new to pg_profile why you might need this PostgreSQL extension in your daily work, using a real-world example.
So, we got acquainted with the structure of the buffer cache and concluded that if all the RAM contents are lost after a failure, the write-ahead log (WAL) is required for recovery. The size of the necessary WAL files and the recovery time are kept bounded thanks to the checkpoint performed from time to time.
In the previous articles, we already reviewed quite a few important settings that relate to WAL in one way or another. In this article (the last in this series), we will discuss the questions of WAL setup that remain unaddressed: WAL levels and their purpose, as well as the reliability and performance of write-ahead logging.
The main task of WAL is to ensure recovery after a failure. But since we have to maintain the log anyway, we can also adapt it to other tasks by adding some more information to it. There are several logging levels. The wal_level parameter specifies the level, and each subsequent level includes everything that gets into WAL at the preceding level, plus something new.
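For illustration, here is a minimal sketch of inspecting and raising the level (wal_level accepts the values minimal, replica and logical; replica is the default since PostgreSQL 10, and changing the setting requires a server restart):

```sql
-- Check the current logging level
SHOW wal_level;                        -- replica by default

-- Raise the level to also support logical decoding;
-- takes effect only after a server restart
ALTER SYSTEM SET wal_level = logical;
```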
We already got acquainted with the structure of the buffer cache — one of the main objects in shared memory — and concluded that to recover after a failure, when all the RAM contents are lost, the write-ahead log (WAL) must be maintained.
The problem yet unaddressed, where we left off last time, is that we do not know where to start playing back WAL records during recovery. To begin at the beginning, as the King in Lewis Carroll's Alice advised, is not an option: it is impossible to keep all the WAL records from the server start — that would require potentially huge amounts of both storage space and recovery time. We need a point that gradually moves forward and from which we can start the recovery (and, accordingly, safely remove all the preceding WAL records). This is the checkpoint, to be discussed below.
What properties must the checkpoint have? We must be sure that all the WAL records starting with the checkpoint will be applied to pages that have already been flushed to disk. If this were not the case, during recovery we could read from disk a version of a page that is too old, apply a WAL record to it and, by doing so, irreversibly corrupt the data.
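In practice, we can force a checkpoint manually and look at where the next recovery would start. A small sketch (pg_control_checkpoint() is a standard system function; the exact set of columns varies slightly across versions):

```sql
-- Force a checkpoint right now (normally they happen automatically)
CHECKPOINT;

-- The REDO location is the point recovery would start replaying WAL from
SELECT checkpoint_lsn, redo_lsn
FROM pg_control_checkpoint();
```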
Last time we got acquainted with the structure of an important component of the shared memory — the buffer cache. A risk of losing information from RAM is the main reason why we need techniques to recover data after failure. Now we will discuss these techniques.
Sadly, there's no such thing as a miracle: to survive the loss of information in RAM, everything needed must be duly saved to disk (or other nonvolatile media).
Therefore, the following approach is taken. Along with the data, a log of changes to these data is maintained. When we change something on a page in the buffer cache, we create a record of this change in the log. The record contains the minimum information sufficient to redo the change if the need arises.
For this to work, the log record must get to disk before the changed page does. And this explains the name: write-ahead log (WAL).
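We can observe write-ahead logging in action by watching the current WAL insert position move forward as we change data. A minimal sketch (the table t is hypothetical; pg_current_wal_insert_lsn() is a standard function in PostgreSQL 10+):

```sql
SELECT pg_current_wal_insert_lsn();   -- e.g. 0/331E4D8

-- Any change produces WAL records...
CREATE TABLE t(n integer);
INSERT INTO t VALUES (1);

-- ...so the insert position has advanced
SELECT pg_current_wal_insert_lsn();
```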
In case of a failure, the data on disk appear inconsistent: some pages were written earlier, and others later. But the WAL remains, and we can read it and redo the operations that were performed before the failure but whose results did not make it to disk in time.
The previous series addressed isolation and multiversion concurrency control, and now we start a new series: on write-ahead logging. To remind you, the material is based on training courses on administration that Pavel Luzanov and I are creating (mostly in Russian, although one course is available in English), but does not repeat them verbatim and is intended for careful reading and self-experimenting.
This series will consist of four parts:
- Buffer cache (this article).
- Write-ahead log — how it is structured and used to recover the data.
- Checkpoint and background writer — why we need them and how we set them up.
- WAL setup and tuning — levels and problems solved, reliability, and performance.
Why do we need write-ahead logging?
Part of the data that a DBMS works with is stored in RAM and gets written to disk (or other nonvolatile storage) asynchronously, i.e., writes are postponed for some time. The less frequently this happens, the less input/output is needed and the faster the system operates.
But what will happen in case of a failure, for example, a power outage or a bug in the code of the DBMS or operating system? All the contents of RAM will be lost, and only data written to disk will survive (disks are not immune to certain failures either, and only a backup copy can help if the data on disk are affected). In general, it is possible to organize input/output in such a way that the data on disk are always consistent, but this is complicated and not very efficient (to my knowledge, only Firebird chose this option).
Usually, and specifically in PostgreSQL, the data written to disk appear inconsistent, and when recovering after a failure, special actions are required to restore data consistency. Write-ahead logging (WAL) is exactly the feature that makes this possible.
Now we've reached the last topic of this series. We will talk about transaction ID wraparound and freezing.
Transaction ID wraparound
PostgreSQL uses 32-bit transaction IDs. This is a pretty large number (about 4 billion), but with an intensively working server, it is not unlikely to get exhausted. For example: at a workload of 1000 transactions per second, this will happen after only about a month and a half of continuous operation.
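The arithmetic behind this estimate is easy to check (a back-of-the-envelope computation, runnable in any PostgreSQL session):

```sql
-- 2^32 transaction IDs consumed at 1000 transactions per second
SELECT (2^32) / 1000 / 86400 AS days_until_wraparound;
-- about 49.7 days, i.e., roughly a month and a half
```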
But we've mentioned that multiversion concurrency control relies on sequential numbering: of two transactions, the one with the smaller number is considered to have started earlier. Therefore, it is clear that simply resetting the counter and starting the numbering from scratch is not an option.
To remind you, we started with problems related to isolation, made a digression about low-level data structure, discussed row versions in detail and observed how data snapshots are obtained from row versions.
We've already mentioned that VACUUM normally does its job, provided nothing holds the transaction horizon back for a long time. The problem is how often to call it.
If we vacuum a changing table too rarely, its size will grow more than desired. Besides, the next vacuum operation may require several passes through the indexes if too many changes have accumulated.
If we vacuum the table too often, the server will constantly do maintenance rather than useful work — and this is no good either.
Note that launching VACUUM on schedule by no means resolves the issue because the workload can change with time. If the table starts to change more intensively, it must be vacuumed more often.
Autovacuum is exactly the mechanism that launches vacuuming depending on how intensively the table changes.
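By default, autovacuum considers a table once the number of dead tuples exceeds autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor × the number of rows. A small sketch of the relevant settings (the table name is hypothetical):

```sql
SHOW autovacuum_vacuum_threshold;     -- 50 by default
SHOW autovacuum_vacuum_scale_factor;  -- 0.2 by default

-- A frequently changing table can be vacuumed more aggressively
ALTER TABLE busy_table SET (autovacuum_vacuum_scale_factor = 0.01);
```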
Last time we talked about HOT updates and in-page vacuuming; today we'll proceed to the well-known vacuum vulgaris. Really, so much has already been written about it that I can hardly add anything new, but the beauty of the full picture requires sacrifice. So be patient.
What does vacuum do?
In-page vacuum works fast, but frees only part of the space. It works within one table page and does not touch indexes.
The basic, "normal" vacuum is done using the VACUUM command, and we will call it just "vacuum" (leaving "autovacuum" for a separate discussion).
So, vacuum processes the entire table. It vacuums away not only dead tuples, but also references to them from all indexes.
Vacuuming runs concurrently with other activities in the system. The table and indexes can be used in the regular way both for reads and for updates (however, concurrent execution of commands such as CREATE INDEX, ALTER TABLE and some others is impossible).
Only those table pages are looked through where some activity took place. To detect them, the visibility map is used (to remind you, the map tracks pages whose tuples are all old enough to be visible in all data snapshots for sure). Only pages not tracked by the visibility map are processed, and the map itself gets updated along the way.
The free space map also gets updated in the process to reflect the extra free space in the pages.
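We can run vacuum by hand and peek at both maps using standard extensions. A sketch (the accounts table is hypothetical; pg_visibility and pg_freespacemap ship with PostgreSQL):

```sql
VACUUM VERBOSE accounts;  -- reports how many tuples and pages were processed

CREATE EXTENSION IF NOT EXISTS pg_visibility;
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

SELECT all_visible, all_frozen
FROM pg_visibility_map('accounts', 0);      -- visibility map bits of page 0
SELECT * FROM pg_freespace('accounts', 0);  -- free space recorded for page 0
```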
Just to remind you, we already discussed issues related to isolation, made a digression regarding low-level data structure, and then explored row versions and observed how data snapshots are obtained from row versions.
Now we will proceed to two closely connected problems: in-page vacuum and HOT updates. Both techniques can be regarded as optimizations; they are important but barely covered in the documentation.
In-page vacuum during regular updates
When accessing a page for either an update or a read, if PostgreSQL sees that the page is running out of space, it can do a fast in-page vacuum. This happens in either of these cases:
- A previous update in this page did not find enough space to allocate a new row version in the same page. Such a situation is remembered in the page header, and next time the page is vacuumed.
- The page is more than fillfactor percent full. In this case, vacuum is performed right away, without putting it off till next time (see the sketch after this list).
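The fillfactor storage parameter that controls the second case is set per table; for example (the table name is hypothetical; 100 percent is the default for tables):

```sql
-- Keep 10% of every page free, leaving room for updated row versions
CREATE TABLE hot_test(id integer, payload text)
WITH (fillfactor = 90);
```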
To get things done before 2020 is over, I decided to publish my blog post with answers to your questions from our webinar on PostgreSQL 13 that we hosted in October. I’m still looking into release 13 and will soon give another talk on PG13 monitoring-related features that became available in this version of PostgreSQL. For now, let’s go through the webinar questions.
After having discussed isolation problems and having made a digression regarding the low-level data structure, last time we explored row versions and observed how different operations changed tuple header fields.
Now we will look at how consistent data snapshots are obtained from tuples.
What is a data snapshot?
Data pages can physically contain several versions of the same row. But each transaction must see at most one version of each row, so that together they make up a consistent picture of the data (in the ACID sense) as of a certain point in time.
Isolation in PostgreSQL is based on snapshots: each transaction works with its own data snapshot, which "contains" data that were committed before the moment the snapshot was created and does not "contain" data that were not yet committed by that moment. We've already seen that although the resulting isolation appears stricter than required by the standard, it still has anomalies.
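We can look at a transaction's snapshot directly. A minimal sketch (txid_current_snapshot() is a standard function; in Repeatable Read the snapshot is taken at the first statement and stays fixed for the whole transaction):

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;

-- xmin : xmax : list of in-progress transactions, e.g. 1001:1005:1002,1004
SELECT txid_current_snapshot();

-- rows committed by other transactions after this point remain invisible here
COMMIT;
```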